There are currently no open network issues.
VPS Node Cypher - Unreachable (Resolved) Critical

Affecting Server - VPS Hosting

  • 22/09/2024 19:19
  • Last Updated 24/09/2024 21:09

POSTED @ 19:19 PM BST - Sunday 22nd September:
LAX VPS Node "Cypher" is unreachable.

We're looking into this.

UPDATE @ 19:41 PM BST - Sunday 22nd September:
This server is experiencing issues with its Hardware RAID card. We are investigating.

UPDATE @ 20:30 PM BST - Sunday 22nd September:
The RAID issue in question is not something we have seen before. Additionally, our upstream provider have not seen this particular issue before, and neither has a trusted server management company we work with for certain tasks.

In situations like this, it's important that there is no guesswork and that we work based on facts.

As such, we have made contact with SuperMicro - the people who make the hardware. They are generally quick to respond and helpful, so we are hoping to hear back shortly to gather their advice on what should be done next.

UPDATE @ 21:20 PM BST - Sunday 22nd September:
If any impacted customers would like a new replacement VPS, please open a ticket and we will deploy one.

The hope is that we will be able to get the server back online with all data intact, although, much to our sadness, we cannot guarantee that will be the case.

We are still waiting for a reply from SuperMicro support at this time.

UPDATE @ 09:10 AM BST - Monday 23rd September:
SuperMicro have just emailed us to confirm that it is a "definite hardware issue with the storage controller cache memory (L2/L3 Cache error)" and that the only solution is to replace the RAID controller.

This is not the news we were hoping for, though it is something we feared might be the case.

We are actively working with SuperMicro to gather their advice on the next steps and will share an update shortly.

UPDATE @ 11:15 AM BST - Monday 23rd September:
Our hardware vendor, SuperMicro, have advised that we bring a new server online and move the SSDs over to it. 

They advised there is a high probability this process will be smooth, with no data loss, although that cannot be guaranteed.

We are now working with our data center to arrange this, and we will provide a further update here as soon as we know more.

Once again, if any impacted customers would like a new replacement VPS, please open a ticket and we will deploy one.

UPDATE @ 17:53 PM BST - Monday 23rd September:
The new hardware that we need (RAID card) is set to be installed in the server on Tuesday.

Once installed, the hope is that we can bring the server back online with no data loss.

We will update this post as soon as more information becomes available.

UPDATE @ 09:56 AM BST - Tuesday 24th September:
The RAID card is set to be installed today at noon, Los Angeles time (Pacific Time).

UPDATE @ 21:07 PM BST - Tuesday 24th September:
I'm extremely relieved to say that the server is back online. There was no file system damage, and the server booted up with no issues following the RAID card replacement.

I apologise for the downtime, which was prolonged because the incident occurred on a Sunday evening and we had to wait on SuperMicro for assistance; they only replied yesterday (Monday).

Stability issues - cello.cleannameservers.com (Singapore shared) (Resolved) Critical

Affecting Server - Cello

  • 22/06/2024 11:01 - 24/06/2024 14:37
  • Last Updated 24/06/2024 14:37

POSTED @ 11:01 AM BST:

We are aware of stability issues on Singapore shared hosting server cello.cleannameservers.com.

We are working to resolve this.

UPDATE @ 11:30 AM BST:

We are continuing to work on this issue, which stems from a DDoS attack that is not being properly filtered.

UPDATE @ 11:53 AM BST:

We have removed the site being hit by this attack from our network. We are hoping that the attack itself will subside very shortly.

UPDATE @ 12:34 PM BST:

This is now resolved. We took this opportunity to update the server's kernel as well, and are now seeing much lower memory usage and overall better performance/stability.

UPDATE @ 17:29 PM BST - 23/06/2024:

Sadly, the attack came back and is ongoing, despite the fact that we removed the target domain from the server yesterday. We are working with our upstream provider to further mitigate the issue; however, their support is only available Monday - Friday and is not the quickest.

We are offering a migration to either London, UK or New Jersey, USA, for any customers who wish to avoid any further impact from this.

UPDATE @ 18:27 PM BST - 23/06/2024:

We have managed to mitigate the vast majority of the impact this was causing, and websites are now loading much quicker.

Please note the cello.cleannameservers.com hostname will not resolve for the time being, so please use https://15.235.183.163:2083/ to access cPanel, or https://15.235.183.163:2096/ to access Webmail.

We aren't declaring this resolved just yet; this post will be kept updated.

UPDATE @ 22:06 PM BST - 23/06/2024:

The DDoS attack is once again a problem.

Every effort is being made to try and resolve this as quickly as possible.

We apologise for the troubles caused.

UPDATE @ 07:18 AM BST - 24/06/2024:

We have taken additional steps to block the attack and services are much more stable.

UPDATE @ 14:37 BST - 24/06/2024:

This issue is now fully resolved. We do not expect any further issues. Our apologies for the troubles caused.

Email Delivery Issues - cPanel Web Hosting (Resolved) Critical
  • 14/05/2024 11:46 - 15/05/2024 12:09
  • Last Updated 15/05/2024 12:08

POSTED @ 11:46 AM BST  - 14/05/2024:
We use a third-party company ( https://www.mail.baby/ ), which specialises in this particular field, to handle outbound email.

Today, 14/05/2024, we have received reports from our customers of the following error:

550 Content block if this is a false positive contact your provider with error NS see https://mail.outboundspamprotection.com/mailinfo?id=xxx

We have engaged with Mail Baby and they are aware of the issue, and working to fix this.

Our apologies for the troubles caused.

This post will be updated as and when news becomes available.

UPDATE @ 17:17 PM BST - 14/05/2024:
Our apologies for the lack of updates on this matter.

Sadly, the Mail Baby team are being quite slow to solve the problem, despite the fact that we escalated the issue a few hours ago to their CEO, who is aware of it.

They are tracking the issue at https://interserver.statuspage.io/incidents/3cw6mg59g2t0

We may soon disable Mail Baby if there is no improvement, and route email locally from our servers instead.

UPDATE @ 18:10 PM BST - 14/05/2024:
We have now disabled Mail Baby due to no improvement.

All emails will now be sent directly from the server that hosts websites.

This may result in decreased email delivery rates (i.e. emails going to junk/spam), but it is preferable to emails not being delivered at all.

As soon as Mail Baby resolve the issue, we will route emails via their platform again.

UPDATE @ 12:07 PM BST - 15/05/2024:
We have now re-enabled Mail Baby as the issue is resolved.

More information about what went wrong, and what is being done to prevent a future issue of this nature, can be found at https://interserver.statuspage.io/incidents/3cw6mg59g2t0

Reboot of UK shared hosting server reggae.cleannameservers.com (Resolved) Critical

Affecting Server - Reggae

  • 18/03/2024 12:13 - 18/03/2024 12:44
  • Last Updated 18/03/2024 14:19

We are performing maintenance on UK shared hosting server reggae.cleannameservers.com

This is to correct an issue concerning disk space limits not being properly enforced, which has resulted in stability issues in recent days.

The server will be briefly inaccessible during this time.

UK Shared Server - Reggae (Resolved) Critical

Affecting Server - Reggae

  • 16/03/2024 00:52
  • Last Updated 16/03/2024 01:03

We are aware that UK shared server Reggae is not responding well.

We're looking into this.

UPDATE - This is now fixed. We'll be performing further maintenance tomorrow to prevent this happening again.

Performance/stability degradation - Cello (Resolved) Critical

Affecting Server - Cello

  • 08/02/2024 10:34
  • Last Updated 08/02/2024 11:25

We are aware of performance/stability degradation on Singapore shared hosting server "Cello".

This is under investigation.

UPDATE @ 11:25 AM GMT: This was the result of an HTTP flood, which has now been mitigated.

Stability issues - Cello (Resolved) High

Affecting Server - Cello

  • 15/12/2023 16:40
  • Last Updated 16/12/2023 17:05

We are aware of HTTP stability issues on Singapore shared hosting server Cello.

At this time, the cause of the problem is not entirely clear. We have so far updated the operating system's kernel and rebooted, but it has not helped.

This post will be kept updated.

UPDATE @ 1810 GMT: We have engaged with cPanel technical support, who were unable to identify the cause of the problem. For the last few hours we have been working with LiteSpeed's support, who are also struggling to identify the cause. We continue to investigate, and apologise for the troubles caused.

UPDATE @ 1916 GMT: Sites are back online now. We see that LiteSpeed's support team has set up a debug build of LiteSpeed on the server. We are awaiting their reply with more information.

UPDATE @ 1954 GMT: LiteSpeed technical support have identified the problem. It was caused by a domain on the server receiving traffic at such a rate that it overwhelmed the pipe logger's processing speed, causing LiteSpeed to buffer the data. LiteSpeed are adding protection against this issue to the product to prevent a future occurrence.

UPDATE @ 1636 GMT - Saturday 16th: The problem has resurfaced, even with the account responsible terminated. We are urgently awaiting LiteSpeed support's response on this matter.

UPDATE @ 1703 GMT - Saturday 16th: The problem seems to have settled down again. We are looking into disabling access logs temporarily until LiteSpeed can prepare a proper software update to resolve this.

VPS Control Panel (instancecontrol.com) Maintenance (Resolved) Critical

Affecting Server - VPS Hosting

  • 17/11/2023 11:51 - 17/11/2023 13:37
  • Last Updated 17/11/2023 11:53

We are performing maintenance on our VPS Control Panel (instancecontrol.com), and as such, it is currently offline.

It will be back online within the next 60-90 minutes.

Prefix issues (Resolved) Critical

Affecting Server - VPS Hosting

  • 16/11/2023 07:43
  • Last Updated 16/11/2023 10:50

The following subnets are currently having accessibility issues:

89.116.171.0/24
89.117.0.0/24 
89.117.96.0/24

We are working on resolving this issue as a priority.

UPDATE @ 10:43 AM GMT:

We are still working to resolve this issue. If you would prefer an IP swap to get back online sooner, please open a support ticket.

UPDATE @ 10:50 AM GMT: 

This is now resolved. We are implementing new measures immediately to prevent a recurrence of this issue, and apologise for the troubles caused.

Emergency Migrations (Resolved) Critical

Affecting Server - VPS Hosting

  • 30/07/2023 10:05
  • Last Updated 25/08/2023 21:52

An email notification was sent regarding the below:

UPDATE @ 13:30 BST - 2nd August 2023 - We have resumed these migrations, and will keep this page updated.

Hello,

Due to stability issues upstream which are out of our control, we are conducting emergency migrations of all VPS customers with services in the USA, who are being moved to a new data center.

The scale of the task at hand means it is not possible for us to contact each customer with their new IP space.

Please login to the VPS control panel at https://instancecontrol.com to see your new IP space.

If you have an IP in the below subnets, you have been successfully migrated:

84.x.x.x
86.x.x.x
89.x.x.x

Completed VPS Nodes:

Helios
Flux 
Zenith 
Cypher 
Lumos 
Apex 
Eon 
Nebula 
Synergy 
Quantum 
Titan 
Pulse 
Nova 
Spark 
Orion 
Aether 
Arcane 
Odyssey 
Solstice
Astral 
Catalyst 
Radiant 
Vertex 

We sincerely apologize for the troubles caused here.

We ask that non-urgent tickets are deferred until this work is complete.

More information will be made available once the migrations are complete.

Kind Regards,
George

LAX VPS Node Guitar - Offline (Resolved) Critical

Affecting Server - VPS Hosting

  • 09/08/2023 11:15 - 09/08/2023 13:34
  • Last Updated 09/08/2023 11:17

LAX VPS Node "Guitar" is currently offline due to bad PDU in the rack.

We are awaiting on-site technicians to check this. 

VPS Node Piano - Offline (Resolved) Critical

Affecting Server - VPS Hosting

  • 29/07/2023 08:12 - 29/07/2023 23:19
  • Last Updated 29/07/2023 22:55

VPS Node "Piano" is offline. It appears the switch is offline.

We are investigating.

UPDATE @ 10:07 AM BST: The New Jersey data center experienced another power issue last night, and one rack is having issues. Technicians are working on it.

UPDATE @ 22:55 PM BST: This is now resolved.

Outage - LAX & NYJ (Resolved) Critical

Affecting Server - VPS Hosting

  • 28/07/2023 23:24 - 29/07/2023 02:49
  • Last Updated 29/07/2023 02:06

We are aware that NYJ and LAX are offline. 

This is being investigated.

UPDATE @ 01:28 BST: Services are coming back online. We will be sending out an email shortly with more information.

UPDATE @ 02:06 BST: Everything is back online. We will be sending out an email shortly with more information.

New Jersey Outage (Resolved) Critical

Affecting Server - VPS Hosting

  • 10/07/2023 23:16 - 15/07/2023 09:05
  • Last Updated 15/07/2023 09:05

We are aware of a network issue in New Jersey. 

UPDATE - 10th July: Everything is now back online. It is not yet clear what happened. What we know so far is that whilst all New Jersey servers became inaccessible (network), some of those also rebooted. Our upstream provider are conducting an investigation. An RFO (Reason For Outage) will be made available within 24 hours via support ticket. We apologize for the troubles caused.

UPDATE - 10th July: It appears the issue has resurfaced. We're working with our upstreams to investigate.

UPDATE @ 23:50 BST - 10th July: There is a facility-level issue impacting the whole building. We are awaiting updates.

UPDATE @ 00:05 BST - 11th July: We are still awaiting an update from the facility.

UPDATE @ 00:24 BST - 11th July: The most recent update we have from the facility is as follows:

We are currently experiencing a sitewide power issue related to our UPS systems at EWR1. We have site personnel investigating the issue. We have called our UPS vendor who is currently in route to the site to assist us as well. We will send updates every 15 minutes as we make progress on resolution.

UPDATE @ 00:55 BST - 11th July: We continue to await an update from the facility. Our sincere apologies for the downtime here.

UPDATE @ 01:05 BST - 11th July: This has not been officially confirmed by the facility, but another tenant in the facility has advised:


We've been informed that an electrical room experienced a fire, was put out by retardant, and the datacenter is in emergency power off status at the requirement of on-site fire fighters.

Ethernet Servers cannot verify this information.

We will share updates as soon as possible. 

UPDATE @ 02:15 BST - 11th July:

An isolated fire in an UPS in an electrical room was detected and put out by fire suppression. The local fire department arrived on the scene, and per NEC guidelines and likely local laws and general best practices for firefighters, cut the power to the building. This caused the down -> up -> down cycle noted earlier today. Current state is that datacenter electricians are on site awaiting access to the building to perform repair work to the UPS, but are currently waiting permission from the fire department to enter the building. Once the electrical work is complete, the power will be applied to HVAC to subcool the facility, which will take an estimated 3-4 hours, and at that point, power will be restored to data halls, which will bring our network and servers back online. The datacenter manager gave a best-case ETA of tomorrow morning, July 11th, for power to be restored to data halls.

UPDATE @ 05:12 BST  - 11th July:

Our data center have shared the following update with us:

Power remains off at the data center per the local Fire Marshall.

After reviewing the site, the Fire Marshall is requiring that we extensively clean the UPS devices and rooms before they will allow us to re-energize the site. We have a vendor at the site currently who will be performing that cleanup. 

We will provide an update at 8:00AM EDT unless something significant changes overnight.

UPDATE @ 08:25 BST  - 11th July:

We have received numerous requests asking for an ETA, but neither we nor the data center have that information at this time.

Ultimately, it is up to the authorities to decide when power may be switched back on, subject to the cleaning/repair process being completed to their satisfaction.

UPDATE @ 12:06 BST  - 11th July:

I would like to express my deepest apologies for the troubles here. In the ten years I've been running Ethernet Servers, nothing like this has ever happened before. It is a deeply regretful and saddening situation to be in. Whilst there are always spare servers and parts available, it's not normal to factor in spare data center buildings, as circumstances like this are extremely rare - something most hosting providers and customers will never experience.

Currently, there is still no update from EWR1 / Evocative (the data center), but they previously stated they will provide an update at 8:00AM EDT, which is in an hour. 

As soon as more information becomes available, it will be shared here. 

UPDATE @ 12:39 BST - 11th July:

The building have just shared the following update with us:

We expect the cleanup to take place at 8am and the fire marshall to inspect afterward, We will keep you updated along the way.

UPDATE @ 14:09 BST - 11th July:

The building have just shared the following update with us:

Our remediation vendor and our team has worked through the night to clean the UPS' at the request of the fire marshal. They have made significant progress and we hope to have the cleaning completed by mid-day, at which time we will engage the fire marshal to review the site.  Following their review, we hope to get a sign off from them so that we can start the reenergizing process. The reenergizing process can take 4-5 hours, as we need to turn up the critical infrastructure prior to any servers. 

UPDATE @ 16:29 BST - 11th July:

The building have just shared the following update with us:

Our cleaning vendor has been working diligently in cleaning the affected areas of the site and is currently around 40% complete. Evocative staff will be meeting with the local township at 2PM EDT to do a walk through of the site to re-energize the facility. We will provide an update after this meeting regarding power restoration status.

UPDATE @ 17:22 BST - 11th July:

The building have just shared the following update with us:

The EWR Secaucus data center remains powered down at this time per the fire marshal. We continue to clean and ready the site for final approval by the fire marshal in order to re-energize the facilities critical equipment. Site management, the fire marshal, and electrical contractors will be meeting at 2PM EDT in an attempt to receive approval from the fire marshal to re-energize the site. We do not foresee any issues that would result in not receiving such approval.

Re-energizing critical equipment will take 4-5 hours. After this process, we will be energizing customer circuits and powering on all customer equipment. We will provide updates as to when customers will be allowed in the facility once approved by the fire marshal.

UPDATE @ 19:40 BST - 11th July:

The building have just shared the following update with us:

The EWR Secaucus data center remains powered down at this time per the fire marshal. 

Site management, the fire marshal, and electrical contractors are currently meeting to review the process of the cleaning effort to get approval from the fire marshal to re-energize the site. 

We will update you as soon as the meeting has concluded.

UPDATE @ 21:27 BST - 11th July:

The building have just shared the following update with us:

We have just finished the meeting with the fire marshal, electrical inspectors, and our onsite management. We have made great progress cleaning and after reviewing it with the fire marshal, they have asked us to clean additional spaces and they have also asked us to replace some components of the fire system. They have set a time to come back and review these requests at 9am EDT Wednesday. We are working to comply completely with these new requests with these vendors and are bringing in additional cleaning personnel onsite to make the fire marshal's deadline.

In preparation for being able to allow clients onsite, the fire marshal has stated that we need to perform a full test of the fire/life safety systems which will be done after utility power has been restored and fire system components replaced. We have these vendors standing by for this work tomorrow.

Assuming that all goes as planned, the earliest that clients will be allowed back into the site to power up their servers would be late in the day Wednesday.

We are actively working on deploying new hardware in a different data center to try and get things online sooner for our shared hosting customers, for whom we maintain daily off-site backups and can restore more quickly.

For VPS customers, the situation is more complex due to the large number of IP addresses we require, the larger server builds, and the sheer amount of data involved.

UPDATE @ 22:57 BST - 11th July:

For our shared hosting customers: we are working on installing the operating system on a new server in a different data center (Equinix) and will begin restoring our backups shortly.

For our VPS hosting customers: we will begin deploying new hardware in our Los Angeles facility and offering customers a new VPS in that facility. It will take time and we will provide an update as soon as possible.

We ask that customers refrain from creating tickets as our priority is on getting everyone back online as quickly as we can, and replying to tickets slows down that process.

UPDATE @ 07:51 BST - 12th July:

The building have shared the following update with us:

The EWR Secaucus data center remains powered down at this time per the fire marshal. We are continuing with our cleanup efforts into the evening and working overnight as we make progress towards our 9AM EDT meeting time with the fire marshal and electrical inspectors in order to reinstate power at the site.

Once we receive approval and utility is restored, we will turn up critical systems. This will take approximately 5 hours. After the critical systems are restored, we will be turning up the carriers and then will start to turn the servers back on.

The fire marshal has requested replacement of the smoke detectors in the affected area as well as a full site inspection of the fire life safety system prior to allowing customers to enter the facility. Assuming that all goes as planned, the earliest that clients will be allowed back into the site to work on their equipment would be late in the day Wednesday.

UPDATE @ 11:38 BST - 12th July:

Unfortunately, the new provider we chose to facilitate our shared hosting customers deployed a server with faulty disk drives...

We no longer consider that a viable move, given that we're two hours away from the fire department's inspection and hope to have power restored within a few hours of the inspection taking place.

UPDATE @ 15:20 BST - 12th July:

We have not yet received an update from the data center, despite emails being sent to request an update. 

As soon as an update is available, it will be shared here.

UPDATE @ 15:29 BST - 12th July:

The latest update from the data center:

I heard the preliminary inspection is good and we are taking steps to energize the property now. 

I’m waiting for the official update from DC Ops.  More to come.

UPDATE @ 17:40 BST - 12th July:

The latest update from the data center:

We have completed the full site inspection with the fire marshal and the electrical inspector and utility power has been restored to the site. 

We are now working to restore critical systems and our onsite team has energized the primary electrical equipment that powers the site. Concurrently, we are beginning work to bring the mechanical plant online. Additional engineers from other facilities are on site this morning to expedite site turn up.

The ETA for bringing up the critical infrastructure systems is approximately 5 hours.

UPDATE @ 18:51 BST - 12th July:

The latest update from the data center:

Our onsite team has energized the primary electrical equipment that powers the site, enabling us to bring our mechanical plant online. We are currently cooling the facility.

As we monitor for stability, we are focused on bringing up our electrical systems. In starting this process, we have identified an issue with powering up our fire panel as well as power systems that were powered by UPS3. While this will cause us a delay, we are working with our vendors for remediation.

We are currently at 25% for completion toward bringing the site back online and the revised ETA for bringing up the critical infrastructure systems is approximately 7 hours. We are still planning for an evening time frame when clients will be able to come back on site. We will send out additional information regarding access to the facility and remote hands assistance and we will notify you once client access to the facility is permitted. 

UPDATE @ 21:38 BST - 12th July:

The latest update from the data center:

Our onsite team is currently bringing our UPS Systems online. We have our UPS vendor onsite assisting us with this. We have brought UPS-4 online and are currently charging it's associated battery system. While bringing UPS-R online we have run into a minor issue that we are currently investigating.  

We are currently at 35% for completion toward bringing the site back online and the revised ETA for bringing up the critical infrastructure systems is approximately 5 hours. We are still planning for an evening time frame when clients will be able to come back on site. 

In process with our power system re-energizing, we have been working on our fire system as well. We are currently sourcing materials to bring our fire system fully online and do not have an ETA for completion. Because of this and fire marshal compliance we will only be allowed to have supervised escorted customer access when we finish bringing up the critical infrastructure systems. We are currently sourcing additional personnel to assist us with this escort policy. 

UPDATE @ 01:21 - 13th July:

All nodes are back online, with the exception of Techno, Piano and Fife, which are giving file system errors. We are looking into this.

UPDATE @ 02:05 - 13th July:

The file system damage on Techno is now fixed, and the server is back online.

File System Checks (FSCKs) are running on Piano and Fife.

UPDATE @ 03:19 - 13th July:

Piano's FSCK has completed and it's back online.

The FSCK on Fife is on-going.

UPDATE @ 07:58 - 13th July:

Fife's FSCK is on-going.

We are working through tickets at this time.

An email will be sent to every customer in New Jersey with more information about the events, and our plan moving forward.

UPDATE @ 12:09 - 13th July:

With regret, the FSCK on Fife is still going. We do not know if the node will be recoverable and/or what state the data will be in.

We will provide another update when the situation changes.

UPDATE @ 20:30 - 13th July:

Whilst the file system check on Fife is still running, if anyone on this node would like a new VPS, please create a support ticket. It would not contain any data, though.

UPDATE @ 11:27 - 14th July:

There is currently no change from the previous update, unfortunately.

Please note that support ticket replies are delayed due to the much larger volume of requests.

On average, prior to this incident, there were around 50-70 tickets in a 72-hour period. As of this moment in time, there are around five times that number.

Additionally, a follow-up email regarding this incident and our plans moving forward is still planned, but the current priority is to normalize the ticket queue as much as possible.

UPDATE @ 09:04 - 15th July:

Unfortunately, the file system check has not successfully recovered the node.

We are asking impacted customers to contact us, and we will supply a new VPS.

Our apologies for the inconvenience caused here. 

Port 22 Restriction Implemented (Resolved) Critical

Affecting Server - VPS Hosting

  • 19/09/2021 17:14 - 20/10/2021 12:14
  • Last Updated 19/09/2021 18:18

SSH brute-forcing is a constant problem for us, and every other hosting provider. This is the act of bots that scan IP ranges and try to gain unauthorized access to servers through password lists.

Ethernet Servers deploys as many countermeasures as possible, such as generating completely random root passwords for every single VPS that is deployed. We also frequently update our operating system templates, which includes removing those that are EOL (End Of Life) and no longer supported by their developers, and ensuring that the latest updates are installed out of the box (so, for example, you don't end up with an OpenSSH server that is months or years old and missing security fixes).

Unfortunately, despite these efforts, we still run into cases where customer VPSs are compromised, which usually results in one or more of the below scenarios:

- The VPS is used maliciously for outbound DDoS attacks, wasting bandwidth and contributing to the already massive (and worldwide) botnet problem. 

- The VPS is not used for DDoS, but for crypto mining (which is a violation of our Terms of Service). This in itself is a big problem, as it is usually extremely resource-intensive and best suited to environments with dedicated CPU power, like dedicated servers.

- The VPS IP address(es) become(s) blacklisted. This is another frequent issue. Not only is this time-consuming for us to clean up, but it degrades our relationship with our upstream providers (most IP vendors or hosting providers aren't too happy with their assets being used maliciously or abusively).

Those are just a few of the problems, but there are many more - for example, the additional work and stress created for customers impacted by such incidents. Not many people want to wake up on a Monday morning to news that their service has been suspended for abuse. We also commonly see customers ask us why there have been hundreds, thousands, or more failed SSH login attempts on their server; we published a knowledgebase article covering this a few years ago.

As such, we have made a decision that is by no means common practice in the industry (but perhaps should be). We aren't afraid to stand out from the crowd in order to improve the security of our products.

Inbound port 22 has now been disabled platform-wide for all VPS customers.

Outbound port 22 remains open as before.

We understand that this decision may seem over-the-top and/or inconvenient, but for the reasons outlined above, we hope you will understand that this change is ultimately in the best interests of maintaining a secure and stable VPS platform. Managed VPS customers are not impacted by this change, as we've long since deployed those servers with custom SSH ports. Some customers may already run SSH on other ports, and others might not use it at all (for example, those who used SSH just once to install a web-based control panel, and never used it since). Therefore, we hope the overall impact on our customers is not too significant.

It should be noted that this change doesn't prevent the problems described above, nor does it make a VPS immune to an SSH compromise, but it will significantly help to keep things on the straight and narrow. In our experience, most of the bots described only check port 22. If there's no reply on port 22, they don't tend to look any further.

What should I do now? I still want to access SSH!
You're still welcome to use SSH, but you will need to configure it to run on another port (not 22) - if you don't do so already. The file you'll want to modify is /etc/ssh/sshd_config (note that /etc/ssh/ssh_config is the client configuration, not the server's). Simply find #Port 22 and replace it with Port <port number>, then run service sshd restart or, in some cases, service ssh restart. You may also need to open the port in iptables/firewalld/ufw, depending on your setup. Exactly which port you choose is up to you, but make sure it's not already in use on your server (you can use something like netstat -lntu to check which ports are already in use). With that said, we'd suggest choosing a port with at least four digits.
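
For reference, here is a minimal sketch of those steps on a typical Linux VPS. Port 2222 is used purely as an example - substitute your own choice - and the exact service name (ssh vs sshd), firewall tool and SELinux requirements depend on your particular setup:

# 1. Edit the SSH daemon configuration (the server config, /etc/ssh/sshd_config)
#    change:  #Port 22
#    to:      Port 2222
nano /etc/ssh/sshd_config

# 2. Confirm the chosen port isn't already in use (no output means it's free)
netstat -lntu | grep ':2222'

# 3. Open the new port in whichever firewall your server runs
ufw allow 2222/tcp                                  # ufw
firewall-cmd --permanent --add-port=2222/tcp        # firewalld
firewall-cmd --reload                               # firewalld
iptables -I INPUT -p tcp --dport 2222 -j ACCEPT     # iptables
# On SELinux-enabled systems (e.g. CentOS/AlmaLinux) you may also need:
# semanage port -a -t ssh_port_t -p tcp 2222

# 4. Restart the SSH service (use whichever service name your distribution provides)
service sshd restart   # or: service ssh restart

# 5. Before closing your current session, test a connection on the new port
ssh -p 2222 root@<your server IP>

If anything goes wrong, the Serial Console described below can be used to undo the change.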

SSH isn't working now, how can I make the change?
You may use the HTML5 Serial Console. This can be found by visiting My Services and clicking on your VPS. Alternatively, it can be found in SolusVM (the login details for which can be found in the email from us titled "VPS Welcome Email").



Again, we know that blocking the one port that is the foundation of accessing a Linux server isn't an industry-standard practice, but in this instance, we feel that the benefits of making the change outweigh the downsides.

We thank you for your business and are happy to help you make the port change or answer any other questions that you may have.


