VoxiHost - Notice history

All systems operational

2025/12

Infrastructure failure
  • Resolved

    This incident has been resolved.

  • Update

    We are currently finalizing our new Customer Panel. While the setup is taking a bit longer than expected, we remain partially operational, accepting orders and delivering VPS services.

    I expect to launch the new panel by the end of this week!

    Best regards, Daniel Marszałkowski

    CEO VoxiHost

  • Update

    Our front page is back up and running, with a new design!
    I'm still working on bringing back the customer panel!

    Best regards, Daniel Marszałkowski
    CEO VoxiHost

  • Update

    Hey, just wanted to drop a quick update on the situation! I'm almost done implementing the new front page. On the backend side, I have already set up multiple backup solutions that perform a global node backup once a day to prevent situations like this (see the sketch below)...

    Best regards, Daniel Marszałkowski
    CEO VoxiHost
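
    For the technically curious, here is a minimal sketch of what such a daily node-wide backup job can look like, assuming Proxmox's standard vzdump tool on the node; the storage ID is a placeholder, not our real configuration:

        #!/usr/bin/env python3
        """Daily all-guest backup via Proxmox's vzdump (sketch).
        Run from cron or a systemd timer on the node itself."""
        import subprocess
        import sys

        BACKUP_STORAGE = "offsite-backups"  # hypothetical Proxmox storage ID

        def backup_all_guests() -> int:
            """Snapshot-mode backup of every VM/CT on this node."""
            cmd = [
                "vzdump",
                "--all", "1",          # every guest on the node
                "--mode", "snapshot",  # no guest downtime
                "--compress", "zstd",
                "--storage", BACKUP_STORAGE,
            ]
            return subprocess.run(cmd).returncode

        if __name__ == "__main__":
            sys.exit(backup_all_guests())

    In practice, a Proxmox built-in backup job (Datacenter → Backup) does the same thing with retention handling; the script above is only meant to make the "global node backup every day" idea concrete.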

  • Update
    We are continuing to work on a fix for this incident.
  • Update

    Emails were delivered from recovery@voxihost.pl; if you don't see one, check your spam folder!

  • Update

    We are working on sending emails to all customers. Please check your inboxes within 2-5 hours.

  • Update

    Update on the current situation

    I have analyzed all VM disk dumps and, as mentioned before, I was only partially able to recover data from them. I am currently on a train to spend New Year's Eve with someone close, to recharge. Before leaving, I managed to restore our mailing system, which means you can now contact us again at support@voxihost.pl. I have also created a dedicated email address for recovery and compensation: recovery@voxihost.pl. From this address, users will receive more information regarding the next steps and their data.

    If I find the time, I will start sending informational emails within the next 2-3 hours, but I cannot promise that yet. I will do my best to be back at work as soon as possible, likely late on January 1st or on January 2nd. I sincerely apologize again for this entire situation. I will do everything in my power to bring everything back to life and compensate everyone affected.

    Best regards, Daniel M. VoxiHost

  • Update

    Dear Customers, I am writing this with a heavy heart and complete honesty. I have been fighting this critical failure for the past 10 hours, but I have actually been on my feet for over 24 hours now. What started as a simple disk alert has ended in a broken Proxmox backend, affecting every single VPS and our own internal systems. I was deeply optimistic at first because the file sizes looked correct, but I have now discovered that many of them are severely corrupted.

    Our internal systems, including the billing panel and database, have also been hit. Since our internal backups run on a 7-day cycle and the next one was due exactly at the end of today, some recent account changes or new registrations from the last few days are currently missing. However, I want to reassure you that your funds are safe. Once everything is stable, I will manually verify all transactions through our payment providers to ensure every credit and service is correctly restored.

    Regarding your data, I am doing everything possible to recover what can be saved. For those whose data is unfortunately unrecoverable on our end, I will be sending an email shortly with a download link to your raw VPS disk image so you can perform your own analysis and potential recovery. For those whose servers were partially saved, I will provide a fresh, functional disk along with a compressed archive of your salvaged files.

    As a one-man army building this hosting dream, this situation is devastating to me. I have reached a point of total exhaustion and I need to take a short break to sleep at least 2 hours and spend some time with my family for New Year’s Day. I need to recharge my mind to avoid making any critical mistakes during the rest of this recovery process. I will be back at work in full force very soon. I cannot express how sorry I am for this catastrophe and I thank you for your patience and for staying with me during this dark hour.

    Best regards, Daniel M. VoxiHost

    I'm sorry... I will try my best to find a way to compensate you all once this is done.

  • Identified

    After comprehensive diagnostics and data recovery operations, we have determined that the host system corruption is unrecoverable through repair. We are proceeding with a clean Proxmox reinstallation.

    • All customer data has been successfully backed up to secure offsite storage

    • Preliminary analysis indicates no customer VM data corruption

    • RAID arrays verified healthy - no hardware failures detected

    Root Cause: The cause of the corruption is still under investigation. Logs and diagnostics have been preserved for detailed analysis after service restoration.

    Our Priority: Restoring all customer services as quickly as possible. Full post-mortem analysis will follow once services are back online.

    ETA: Services resuming within 2-3 hours.

  • Update

    We believe all customer data has been safely extracted, though this still has to be verified. No data on customer servers should have been affected: our alert system indicates that most VPS servers weren't touched by any drive errors, and those that were went into read-only lockdown (see the sketch below)!

    I'm still looking into fixing this as fast as possible.

    Daniel M.

    CEO VoxiHost
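
    For reference, "read-only lockdown" here refers to standard Linux behavior (e.g. ext4 remounting a filesystem read-only on I/O errors when mounted with errors=remount-ro). A generic sketch of how such a state can be spotted from /proc/mounts, not our actual alerting code:

        #!/usr/bin/env python3
        """Flag filesystems that are mounted read-only, e.g. after ext4's
        errors=remount-ro behavior kicks in on an I/O error.
        Generic Linux sketch, not VoxiHost's actual alert system."""

        def read_only_mounts() -> list[tuple[str, str]]:
            """Return (device, mountpoint) pairs whose options include 'ro'."""
            hits = []
            with open("/proc/mounts") as f:
                for line in f:
                    device, mountpoint, _fstype, options = line.split()[:4]
                    if "ro" in options.split(","):
                        hits.append((device, mountpoint))
            return hits

        if __name__ == "__main__":
            for device, mountpoint in read_only_mounts():
                print(f"read-only: {device} on {mountpoint}")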

  • Update

    We are currently recovering customer files; all data is being transferred to a safe drive. After that, we will continue investigating.

  • Update

    We are currently running a disk check to verify the issue.

  • Investigating
    We are currently investigating this incident.

2025/11

Node Reboot for Guaranteed CPU Performance Baseline
  • Update
    2025/11/04 at 23:12:00

    Hey, Daniel here!

    I wanted to add a small additional note regarding the maintenance I performed. Today, I observed some weird behavior with the CPU frequency on the server node: specifically, the clock speed was dropping dramatically, sometimes as low as 400 MHz, before instantly jumping back up to 4 GHz+.

    I traced this issue to the default Energy Performance Preference (EPP) settings, which were aggressively trying to save power during brief idle periods. While that behavior is fine for laptops, on a server with many running VPS instances this rapid frequency oscillation was causing a noticeable increase in latency and unpredictable performance.

    I decided to fix this immediately, performing the necessary kernel changes overnight, when only a small portion of the customer base is active during EU hours.

    As my goal is to guarantee the best, most consistent performance possible for all our VPS customers, I have overridden the default power profile. The server is now locked to a high-performance profile with a guaranteed minimum baseline of 4.2 GHz, ensuring zero lag and maximum stability for your services.
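    For transparency: the node itself was changed through kernel configuration and a reboot, but the equivalent runtime knobs live in the standard Linux cpufreq sysfs interface. A minimal sketch of that kind of change, assuming an intel_pstate / amd-pstate system (only those drivers expose the EPP file):

        #!/usr/bin/env python3
        """Pin every CPU to a high-performance cpufreq profile (sketch).
        Uses the standard Linux cpufreq sysfs interface; must run as root."""
        from pathlib import Path

        MIN_FREQ_KHZ = 4_200_000  # 4.2 GHz baseline, in kHz as sysfs expects

        for policy in sorted(Path("/sys/devices/system/cpu/cpufreq").glob("policy*")):
            (policy / "scaling_governor").write_text("performance")
            (policy / "scaling_min_freq").write_text(str(MIN_FREQ_KHZ))
            # The EPP knob is only exposed by intel_pstate / amd-pstate
            epp = policy / "energy_performance_preference"
            if epp.exists():
                epp.write_text("performance")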

  • Completed
    2025/11/04 at 23:11:00

    The scheduled infrastructure maintenance is now complete. The Node's CPU control settings were updated successfully.

    Maintenance Summary

    The system's power-saving profile was switched to a guaranteed high-performance profile.

    • Minimum Frequency (Baseline): 4.2 GHz

    • Maximum Frequency (Turbo): 5.759 GHz

    Benefit Summary

    • Zero Latency Jitter: The deep idle state was eliminated, ensuring services receive instantaneous power when requested.

    • Predictable Consistency: The Node now maintains a 4.2 GHz baseline, minimizing performance variance across all virtual machines.
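
    If you want to check the new baseline yourself from a shell on the node, here is a small sketch that reads the live per-policy frequency from the same cpufreq sysfs interface (assumes the standard layout):

        #!/usr/bin/env python3
        """Spot-check that no core's current frequency sits below the
        4.2 GHz baseline (sketch; reads standard cpufreq sysfs files)."""
        from pathlib import Path

        BASELINE_KHZ = 4_200_000  # 4.2 GHz in kHz

        for policy in sorted(Path("/sys/devices/system/cpu/cpufreq").glob("policy*")):
            khz = int((policy / "scaling_cur_freq").read_text())
            status = "OK" if khz >= BASELINE_KHZ else "below baseline"
            print(f"{policy.name}: {khz / 1_000_000:.2f} GHz  {status}")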

  • Update
    2025/11/04 at 23:10:00

    We are making sure that the configuration was applied correctly; all VPS servers are back up and working.

  • In progress
    2025/11/04 at 23:00:01
    Maintenance is now in progress
  • Planned
    2025/11/04 at 23:00:00

    Impact: Users will experience a brief, single service interruption as the server node performs a necessary reboot.

    Reason for Reboot: We are updating the server's kernel configuration to switch the CPU from an energy-saving profile to a performance-guaranteed profile. This crucial change ensures the system strictly maintains a 4.2 GHz minimum clock speed for predictable, low-latency VM performance. Crucially, the system retains the ability to dynamically boost up to 5.7 GHz when needed for demanding tasks.

    We apologize for the brief downtime and are committed to maximizing the reliability of your services.

2025/10

NeoProtect outage
  • Postmortem

    A major service interruption occurred starting at 21:45:00 on October 30th. The root cause was an external, critical failure at our former DDoS Protection vendor, NeoProtect Remote Shield.

    NeoProtect's upstream provider, CDN77 / Datapacket, disabled all active BGP sessions towards the NeoProtect network, instantly causing all services protected by their Remote Shield to become unrouteable and unavailable.

    Our resolution involved an emergency network migration:

    1. We attempted to route raw network traffic via Amsterdam as a temporary fix, but complex routing issues prevented immediate deployment overnight.

    2. We successfully established contact with a new DDoS Protection partner, PletX.net, and performed a full, critical network swap.

    3. Service was fully restored at 13:53:24 on October 31st.

    Outcome: VoxiHost has permanently migrated to PletX.net for DDoS protection as NeoProtect has since discontinued its Remote Shield service. We are implementing a raw-network standby failover with our DC provider to prevent future external vendor failures from causing extended downtime.

  • Resolved

    This incident has been resolved. From now on, we will be using PletX.net DDoS protection. A postmortem with more information and a summary of this incident will follow soon.

  • Monitoring

    We are now running on PletX protection & networking. Everything should be back to normal!

  • Update

    We expect to have this issue solved in the morning, EU time; we will replace NeoProtect as soon as the PletX team wakes up and can perform the setup. This issue with NeoProtect wasn't expected at all: their provider pulled the 'plug' on them and disconnected the BGP sessions, leaving us, and many other companies, completely stranded.

    We hope for our customers' full understanding, and we will take all the necessary steps to provide compensation & make things right after this hard time...

  • Update

    We are waiting for PletX.net routing & whitelisting to be configured to replace NeoProtect.

  • Update

    Information provided by NeoProtect: "Our Upstream CDN77 / Datapacket has deactivated all BGP sessions towards our network. This results in full downtime of all associated services such as all Remote Shield customers (aside from XC customers in AMS).
    The ETR given by them at this time is "tomorrow", we are trying to get this reconsidered but we do not estimate for reconsideration to happen in our favor."

    We are trying our best to get around this and deploy PletX DDoS Protection, which will replace NeoProtect.

  • Identified

    We have identified this to be a global outage at NeoProtect:

    https://status.neoprotect.net/incidents/kdmtx0wk3h1l

  • Investigating
    We are currently investigating this incident.
