Webarchitects server status updates — if there are any issues with our servers or services we will post updates to this page, we also use the #webarch channel on irc.libera.chat, there is a web interface for #webarch chat and our office phone number is 0114 276 9709.
We are upgrading all our instances of ONLYOFFICE to version 8.2.2, this will result in the ONLYOFFICE services being unavailable for a little while.
We are going to shut down host3.webarch.net, webarch1.co.uk and webarch2.co.uk in order to add some additional disk space to the servers, this will cause them to be unavailable for a few minutes.
We are making a start on the upgrade of webarch3.co.uk, webarch5.co.uk and webarch6.co.uk from Debian Bullseye to Debian Bookworm, this will take several hours and during this time websites hosted on these servers will be unavailable and/or not working for periods, we are sorry for the inconvenience this will cause.
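For anyone curious what an in-place release upgrade like this involves, here is a minimal sketch of the standard Debian procedure (assuming the default apt sources layout; the real process involves more preparation, backups and service-specific checks):

    # point apt at the new release, then upgrade in the recommended two stages
    sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
    apt update
    apt upgrade --without-new-pkgs   # minimal upgrade first
    apt full-upgrade                 # then the full distribution upgrade
    reboot                           # boot into the Bookworm kernel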
We are upgrading GitLab at git.coop to apply GitLab Patch Release: 17.6.1, this will result in the service being unavailable for a little while.
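As a rough sketch, applying a patch release to an Omnibus GitLab install from GitLab's apt repository looks like this (the exact package version string here is illustrative):

    apt update
    apt install gitlab-ce=17.6.1-ce.0   # pin the patch release; the package runs the migrations
    gitlab-ctl status                   # confirm all services came back up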
We are making a start on the upgrade from Debian Bullseye to Debian Bookworm for webarch2.co.uk, webarch4.co.uk and webarch7.co.uk, this will result in sites on these servers being unavailable / ceasing to work for periods over the next few hours, sorry for the inconvenience caused.
We are upgrading webarch1.co.uk to Debian Bookworm, services that run on it will therefore be unavailable for periods over the next couple of hours.
We are rebooting all Debian Bookworm servers to apply the [DSA 5818-1] linux security update, so the services they run will be unavailable for a little while.
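The pattern for applying a kernel DSA is always roughly the same:

    apt update
    apt upgrade   # pulls in the fixed linux-image package
    reboot        # a new kernel only takes effect after a reboot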
We are upgrading host3.webarch.net to Debian Bookworm, services that run on it will therefore be unavailable for periods over the next couple of hours.
We are shutting down webarch4.co.uk to add some additional disk space, it will be unavailable for a little while as a result.
We are shutting down host3.webarch.net to add some additional disk space, it will be unavailable for a little while as a result.
We are upgrading GitLab at git.coop to version 17.6.0, this will result in the service being unavailable for a little while.
We are upgrading the Mailcow server to apply the 2024-11b Update, this will result in Mailcow services being unavailable for around 10 minutes.
We are upgrading GitLab at git.coop to version 17.5.2, which is, we expect, a security update, there are no release notes yet. The service will be unavailable for a little while as a result.
We are moving the file system of the GitLab instance at git.coop to a different disk array, this requires it to be shut down and it might take around 20 minutes before it has been moved and can be brought back up again, sorry for the downtime.
We are upgrading all ONLYOFFICE servers to version 8.2.1, this will result in ONLYOFFICE services being unavailable for a little while.
We are updating all Mailcow instances to apply the 2024-11a Update, this will result in email being unavailable for around 10 minutes as services are restarted.
Over the next 24 hours or so we will be upgrading all ONLYOFFICE instances to version 8.2.0 and Nextcloud to version 29.0.9, this will result in a little downtime for these services as they are upgraded.
We are upgrading all Mailcow servers to the Moovember | Mailbox Rename, SOGo 5.11.1, Rspamd 3.10.2, and More release, this will result in the Mailcow servers being unavailable for around 10 or 15 minutes.
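Our hosted Mailcow instances are the standard mailcow-dockerized stack, so an update is essentially the following (the path is illustrative, wherever the mailcow-dockerized checkout lives):

    cd /opt/mailcow-dockerized
    ./update.sh        # fetches the new release, pulls images, recreates the containers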
We are upgrading GitLab at git.coop to GitLab Patch Release: 17.5.1, the server will be unavailable for a little while as a result.
We are sorry that the GitLab runners have been offline since the GitLab upgrade, we have now implemented a fix for this issue.
We are upgrading GitLab at git.coop to version 17.5.0, the release announcement for this upgrade has not yet been published.
We are upgrading all our hosted Nextcloud instances to version 29.0.8.
We are upgrading GitLab at git.coop to apply GitLab Critical Patch Release: 17.4.2, this will result in a little downtime for the service.
We are sorry for the eleven minute downtime that our Sheffield based services have just suffered, this was caused by an upstream connectivity issue that has now been resolved.
We are upgrading all our hosted Discourse instances to version 3.4.0.beta2, this update contains security updates, the update process will result in sites being unavailable for a little while.
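The downtime comes from the way Discourse is deployed: upgrading a discourse_docker instance rebuilds its container, roughly like this ("app" is the conventional container name):

    cd /var/discourse        # the standard discourse_docker checkout location
    ./launcher rebuild app   # pulls the new version and rebuilds; the site is offline while this runs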
We are adding disk space to webarch6.co.uk and webarch7.co.uk and this will require these servers to be rebooted so they will be unavailable for a moment.
We are rebooting all our hosted Debian Bullseye servers to upgrade the Linux kernel to version linux-image-5.10.0-33-amd64, this will result in services being unavailable for a little while.
We are rebooting our Debian Bookworm servers in order to apply the [DSA 5782-1] linux security update, some services will be unavailable for a little while as a result.
We are upgrading GitLab at git.coop to GitLab Patch Release: 17.4.1, this will cause the service to be unavailable for a little while.
We are upgrading Docker to version 27.3.1 on all our servers and this will result in services that run in Docker containers, like Discourse and Mailcow, being restarted.
We are upgrading Docker to version 27.3.0 on all our servers and this will result in services that run in Docker containers, like Discourse and Mailcow, being restarted.
We are upgrading GitLab at git.coop to version 17.4 so the services will be unavailable for a little while.
We are upgrading GitLab at git.coop to apply the GitLab Critical Patch Release: 17.3.3, this will result in the service being unavailable for a little while.
We are upgrading hosted Nextcloud servers to version 29.0.7, this will result in sites being put into maintenance mode during the upgrade.
We are upgrading GitLab at git.coop to version 17.3.2, this will result in a little downtime.
We are upgrading Docker to version 27.2.1, this will result in all services running in Docker containers, including Mailcow and Discourse instances, being restarted.
We are upgrading ONLYOFFICE on all our servers to version 8.1.3.
We are upgrading all hosted Nextcloud instances to version 29.0.6, this will result in around 5 minutes of downtime for each instance.
We are upgrading all our Debian Bookworm and Bullseye servers to the latest point releases, 12.7 and 11.11, for Bookworm this update includes a new Linux kernel version and the update will therefore require a reboot, the downtime should be minimal, but all services running on Debian servers will be unavailable for a little while.
We are upgrading the host2.webarch.net server from Debian Bullseye to Debian Bookworm, this will require the server to be rebooted a couple of times but the service downtime should be minimal.
We are updating Docker to version 27.2.0 on all our servers, this will result in Mailcow and some other services being restarted.
We are upgrading all Discourse servers to version 3.4.0.beta1 : "Hot" now in default top menu items, new feature indicator, Polls can show absolute numbers and more, this will result in the services being unavailable for a little while.
We are adding some additional disk space to the webarch7.co.uk server, this will result in a little downtime.
We are upgrading GitLab at git.coop to GitLab Patch Release: 17.3.1, this will result in the service being unavailable for a little while.
We are upgrading all our hosted Mailcow servers to 2024-08a - Dovecot updated to 2.3.21.1 as this addresses two security vulnerabilities within Dovecot.
We are upgrading GitLab at git.coop, for a yet to be announced update, so it will be unavailable for a little while.
We are upgrading Docker to version 27.1.2, this will result in all services that run in Docker containers being restarted, specifically Mailcow and Discourse, these services will therefore be unavailable for a little while.
We are upgrading containerd.io on servers to version 1.7.20, this will result in all services that run in Docker containers being restarted, this includes Discourse and Mailcow services. In addition we are upgrading the Linux kernel on Debian Bullseye servers to apply the [DSA 5747-1] linux security update, this will result in these servers being rebooted and therefore services they run will be unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab Patch Release: 17.2.1 so it will be unavailable for a little while.
We have tracked down and addressed the problem with the Dovecot / IMAP mail.mail.coop server, it is now working, and we intend to follow up on the issues that caused this problem.
There has been a problem with the mail.mail.coop server following the Mailcow upgrade, Dovecot / IMAP doesn't run even though the container does. No email will be lost but it means that users will not be able to read their email, email can still be sent using clients other than SOGo. We are working on this issue and hope to have a solution soon, no other Mailcow servers have this issue.
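For a container that is up while the daemon inside it is not, the first diagnostic step is usually the service's own logs, something like this (service name as in the mailcow-dockerized compose file, path illustrative):

    cd /opt/mailcow-dockerized
    docker compose ps dovecot-mailcow                 # container state
    docker compose logs --tail=100 dovecot-mailcow    # why Dovecot itself will not start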
We are upgrading all our hosted Mailcow servers to apply the Mooly Update 2024 | Security Update, this will cause the services to be unavailable for around 10 to 15 minutes, sorry for the inconvenience this will cause, we would normally try to do updates of this nature out of office hours but as these updates have security impacts we are doing them straight away.
We are upgrading systemd on Debian Bookworm servers, this will result in services being restarted.
All the services are now up and running again, sorry for the disruption today.
We are shutting down and restarting all the virtual servers on xen5.webarch.net, hopefully for the last time today, they will be unavailable for a little while.
We have made a mistake with the configuration on xen5.webarch.net and are going to have to shut down and restart all the virtual servers it runs shortly, we will post an update here when this work starts.
We are now restarting all the servers.
We are shutting down xen5.webarch.net now, at 12:30 BST, to move the server it runs on, this will impact git.coop, mail.mail.coop, webarch6.co.uk and webarch8.co.uk, we anticipate that this will take 30 minutes at the very most.
As notified last week, a little later today we are going to be shutting down and restarting xen5.webarch.net, this server currently hosts git.coop, mail.mail.coop, webarch6.co.uk and webarch8.co.uk, this will result in some downtime, we will post further updates prior to starting this work.
We are updating the IP addresses of our internal DNS servers for all our Sheffield hosted servers, this will cause services that use these, like Docker, to be restarted and this will have the knock-on effect of causing the services that run in Docker containers to be restarted.
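Docker reads its upstream resolvers from /etc/docker/daemon.json, which is why a change of resolver addresses means a daemon restart, and with it a restart of every container; schematically (the addresses below are documentation placeholders, not our real resolvers):

    # /etc/docker/daemon.json (excerpt)
    # { "dns": ["192.0.2.1", "192.0.2.2"] }
    systemctl restart docker    # restarts the daemon and therefore its containers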
We are upgrading all our hosted Discourse sites to version v3.3.0.beta5, this will result in the sites being unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab Patch Release: 17.2.1, this will result in a little downtime for the service.
We are upgrading Docker on all servers to version 27.1.1, this will cause services that run in Docker containers, Discourse and Mailcow, to be restarted.
On Wednesday 31st July 2024, during the afternoon, we will need to shut down and restart the xen5.webarch.net server, this currently hosts git.coop, mail.mail.coop, webarch6.co.uk and webarch8.co.uk, this will result in some unavoidable downtime, we will post more detailed information on the day as the work progresses.
We are upgrading Docker to version 27.1.0, so services that run in Docker containers, like Mailcow and Discourse, will be restarted as a result.
We are upgrading all hosted Nextcloud instances to version 29.0.4, this will result in instances being unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab 17.2, ONLYOFFICE instances to ONLYOFFICE-DocumentServer-8.1.1 and a few other updates, some of which will require server reboots.
We are updating hosted Discourse instances to apply the 3.2.4: Security and bug fix release, this will cause Discourse instances to be unavailable for a little while.
We are rebooting Debian Bullseye servers for a linux security update, sorry for the downtime this will result in as servers are rebooted.
We are upgrading all our hosted Discourse servers to v3.3.0.beta3, this will result in the instances being unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab Critical Patch Release: 17.1.2, it will be unavailable for a little while as a result.
We are upgrading Docker on all servers that run services in Docker containers to version 27.0.3 and also Mailcow to Moone Update 2024 | Flatcurve Update Phase 1 - Revision A, this will result in Mailcow and some other services like Discourse being unavailable for a little while.
We need to add some additional disk space to webarch7.co.uk, this will require shutting the server down, growing the partition and then restarting it, this will cause services that run on this host to be unavailable for a little while.
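Once the underlying virtual disk has been enlarged, growing the partition and filesystem from inside the guest typically looks like this (device names are illustrative):

    growpart /dev/xvda 1    # grow partition 1 to fill the enlarged disk (cloud-guest-utils)
    resize2fs /dev/xvda1    # grow the ext4 filesystem to fill the partition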
We are upgrading all our hosted Debian Bookworm servers to the latest point release (12.6), this will require a reboot for the latest Linux kernel, so services will be unavailable momentarily when this happens.
We are shutting down webarch8.co.uk to move it to a different physical server, this will cause it to be unavailable for less than a minute.
We are upgrading GitLab at git.coop to version 17.1.1, this will cause the service to be unavailable for a little while.
We are upgrading Docker on all servers to version 27.0.2, this will cause services that run in Docker containers like Mailcow and Discourse to be restarted.
We are moving webarch6.co.uk to a different physical server, this requires a shutdown and restart, it will be unavailable for a minute or so.
We are upgrading hosted Nextcloud instances to version 29.0.3 and ONLYOFFICE to version 8.1.0, this will result in the services being unavailable for a while, if there are any issues with Nextcloud as a result please get in touch.
We are upgrading Docker to version 27.0.1, this will result in all services that run in Docker containers, specifically Mailcow and Discourse, being restarted.
We are upgrading GitLab on git.coop to version 17.1, this will result in a little downtime for the service.
We are upgrading GitLab, to GitLab Patch Release: 17.0.1, as a result it will be unavailable for a little while.
We are updating Docker on all servers to version 26.1.4, this will cause all services that run in Docker containers, like Mailcow and Discourse, to be restarted.
We are upgrading containerd to version 1.6.33, this will result in Docker being restarted which in turn will cause Mailcow, Discourse and other services that run in Docker containers to be restarted, so these services will be unavailable for a little while.
We are upgrading all Nextcloud instances to version 28.0.6, this will result in the instances being unavailable for a little while.
We are upgrading GitLab at git.coop to version 17.0.1 so it will be unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab 17.0, this will result in the service being unavailable for a little while.
We will be upgrading all hosted Discourse instances today to version 3.3.0.beta2, this will result in the sites being unavailable for around 15 minutes each during the upgrade process.
We are upgrading Docker on all servers to version 26.1.2, this will result in all services running in Docker containers, like Discourse and Mailcow, being restarted.
We are updating GitLab at git.coop to apply GitLab Patch Release: 16.11.1, this will result in the service being unavailable for a little while.
We are rebooting Debian Bookworm and Bullseye servers for the [DSA 5682-1] glib2.0 security update, sorry for the downtime this will cause, it shouldn't be for more than a few minutes.
We are rebooting all our Debian Bullseye servers to apply [DSA 5681-1] linux security update, this will result in services that run on these servers not being available for a little while.
We are rebooting all our Debian Bookworm servers to apply [DSA 5680-1] linux security update, this will result in services that run on Debian Bookworm servers being unavailable for a moment while servers are rebooted.
We are upgrading all our hosted Mailcow servers to apply the Moopril Update 2024 | Security Update, this will result in the services being restarted.
We are upgrading Docker on all our servers to version 26.1.1, this will result in services that run in Docker containers, including Mailcow and Discourse, being restarted.
We are upgrading GitLab at git.coop to apply GitLab Patch Release: 16.11.1, this will cause the service to be unavailable for a little while.
We are upgrading Docker to version 26.1.0, this will result in services running in Docker containers like Discourse and Mailcow being restarted.
We are upgrading Docker to version 26.0.2, this will cause all services that run in Docker containers to be restarted including Discourse and Mailcow.
We are upgrading GitLab at git.coop to GitLab 16.11, this will result in the instance being unavailable for a little while.
We are updating GitLab at git.coop to GitLab Patch Release: 16.10.3, this will result in the service being unavailable for a little while.
We are upgrading Docker on all our servers to version 26.0.1, this will result in all Mailcow and Discourse servers being restarted and therefore the services will be unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab Patch Release: 16.10.2, as a result the service will be unavailable for a little while.
We are upgrading all our hosted ONLYOFFICE servers to version 8.0.1 and Nextcloud to version 28.0.4, if you have any issues with your Nextcloud / ONLYOFFICE servers tomorrow, Monday 8th April 2024, please get in touch.
We are upgrading all our hosted Mailcow servers to apply Moopril Update 2024 | Security Update, we are doing this during office hours as this update includes a fix for a cross-site scripting vulnerability and an arbitrary code execution vulnerability, Mailcow servers will be unavailable for a little while as a result.
The data centre router outage that caused the loss of connectivity for our services in Sheffield has been resolved and everything is running again as normal.
The connectivity issue with our services in Sheffield has been caused by a short power outage, an engineer is on the way to the data centre to investigate and fix the problem.
There appears to be an issue with our services hosted on our infrastructure in Sheffield, we are investigating the problem.
We are adjusting some Mailcow settings for mail.mail.coop so it will be unavailable for a little while as services are restarted.
We are moving some servers between hosts (the shared GitLab runner server and the shared ONLYOFFICE servers) and upgrading mail.mail.coop to add additional RAM, as a result these services will not be available for a little while.
We are upgrading GitLab to GitLab Security Release: 16.10.1, as a result git.coop will be unavailable for a while.
We are upgrading Debian from Bullseye to Bookworm and also GitLab at git.coop to version 16.10 so the service will be unavailable at times for the next hour or so.
We are upgrading Docker to version 26.0.0 on all our servers, this will result in Discourse and Mailcow services being restarted.
We are upgrading Docker to version 25.0.5, this will result in services running in Docker containers like Discourse and Mailcow being restarted.
We are upgrading all our hosted Discourse servers to 3.3.0.beta1, this will result in the Discourse instances being unavailable for a little while.
Very sorry about the issue this morning with the certificate for webarch.email only being valid for mail.webarch.email, the issue has now been fixed, the additional subjectAltNames have been added to the cert.
There has been an issue with the SSL / TLS certificate on webarch.email this morning, we are working on the problem and it should be resolved soon. In the meantime the server can be accessed using mail.webarch.email (webarch.email, the domain without the mail sub-domain, is missing from the certificate).
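The names a certificate actually covers can be checked from any machine with OpenSSL, for example:

    echo | openssl s_client -connect webarch.email:443 -servername webarch.email 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'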
We are upgrading Docker on all our servers to version 25.0.4, this will result in all services that run in Docker containers being restarted, specifically Mailcow and Discourse services.
We are upgrading GitLab at git.coop to GitLab Security Release: 16.9.2, the instance will be unavailable for a little while as a result.
We are rebooting webarch6.co.uk to add some additional RAM and CPU cores, this will result in a little downtime for the server.
We are upgrading GitLab at git.coop to GitLab Security Release: 16.9.1, this will result in the service being unavailable for a little while.
We are sorry about the problems with the SMTP server at webarch.email this morning, the service is now working and we are looking into the cause of the issue.
We are upgrading all our hosted Mailcow servers to apply the Febmooary 2024 Update, this will result in all Mailcow services being restarted.
We are upgrading GitLab at git.coop to Security Release: 16.8.2, the service will be unavailable for a little while as a result.
Sorry for the disruption to the webarch.email service this afternoon and evening, we believe that everything is working again, the cause of the problems was related to last night's DDOS attack.
The issue earlier today with Mailcow at webarch.email and the DNS unbound container appears to have been related to the DNS settings for Docker, we are now updating the DNS configuration for all servers that run services in Docker, this will cause service restarts.
We are investigating an issue with webarch.email relating to the DNS unbound container, this appears to be causing mail delivery failures, no mail should be lost but it will be delayed.
We have blocked around 22,000 IP addresses and our Sheffield services should be more-or-less back to normal now.
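At that scale the blocking has to be set-based rather than one firewall rule per address; the general shape of it, with ipset (the input file name here is illustrative):

    ipset create ddos_block hash:net
    while read -r net; do ipset add ddos_block "$net"; done < blocked_networks.txt
    iptables -I INPUT -m set --match-set ddos_block src -j DROP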
Our Sheffield services appear to be suffering from a denial of service attack, we are working with the data centre to address the problem, sorry for the resulting lack of connectivity.
We are going to upgrade all hosted Nextcloud instances on virtual servers and shared hosting to version 28.0.2 which provides Nextcloud Hub 7, in addition we are upgrading ONLYOFFICE Document Server to version 8.0, we will first upgrade ONLYOFFICE and then Nextcloud, the whole process will take several hours and services might not be available or work as expected during this work.
We are shutting down webarch.email to add some additional disk space, the services provided by this server will be unavailable for a little while as a result.
We are updating the Mailcow SOGo containers in order to hide the "CKEditor 4.22.1 version is not secure" warning, this just removes the warning, we will update SOGo as soon as the project ships a version with an updated editor.
We are updating all our hosted Mailcow servers to version 2024-01e, this will result in the services being unavailable for a little while.
We are upgrading GitLab at git.coop to version 16.8.2, this will result in the service being unavailable for a little while.
We are upgrading Docker on all our servers to version 25.0.3, this will result in all services that run in Docker containers being restarted, this includes Mailcow and Discourse services.
We are upgrading all our hosted Mailcow servers to version 2024-01c, this will result in the services being unavailable for a little while.
We are upgrading Docker on all our servers to version 25.0.2, this release contains multiple security fixes, the update will cause all services that run in Docker containers, specifically Discourse and Mailcow, to be restarted.
We are upgrading all our hosted Discourse servers to 3.2.0.beta5: add groups to DMs, mobile chat footer redesign, passkeys enabled by default, this will result in the services being unavailable for a little while.
We are upgrading GitLab at git.coop to install the GitLab Critical Security Release: 16.8.1, this will result in the service being unavailable for a little while.
We are upgrading Docker on all our managed servers to version 25.0.1, this will result in all services that run in Docker containers being restarted, this includes Mailcow and Discourse servers.
We are upgrading our hosted Mailcow servers to apply the 2024-01b update, this will result in them being unavailable for a little while.
We are upgrading our Mailcow servers to apply the 2024-01a (Release: 18th January 2024) update, this will cause a little downtime for the servers.
We are upgrading Docker on all our servers to version 25.0.0, this will cause all services that run in Docker containers to be restarted, this includes Mailcow and Discourse.
We are upgrading GitLab at git.coop to GitLab 16.8 Release so the service will be unavailable for a little while.
We are upgrading all our hosted Mailcow servers to apply the Janmooary 2024 Update | The Multiarch (x86 + ARM64) & Performance Update, this will result in the Mailcow servers being unavailable for a little while.
We are upgrading containerd.io on all our servers to version 1.6.27, this will result in all services that run in Docker containers shutting down and being restarted, this includes all Discourse and Mailcow servers.
We are installing the GitLab Patch Release: 16.7.3 to git.coop so the service will be unavailable for a little while.
We are upgrading our hosted Discourse instances to 3.2.0.beta4: easier access to chat threads, chat mobile redesign, experimental admin sidebar, and more, as a result they will be unavailable for a little while.
We are upgrading GitLab at git.coop for the GitLab Critical Security Release: 16.7.2 update, the instance will be unavailable for a little while as a result.
We have resolved the problems with the upgrades to the two shared ONLYOFFICE servers, onlyoffice1.webarch.net and onlyoffice2.webarch.net, and we are now upgrading all instances of Nextcloud on shared hosting to Nextcloud server 27.1.5, this will take a little while, if your Nextcloud instance has any issues tomorrow please get in touch.
We are sorry to say that yesterday's upgrade of the shared ONLYOFFICE servers ran into problems and we are still working on getting them up and running again, as a result Nextcloud instances on shared hosting won't have access to WYSIWYG document editing, all Nextcloud instances on managed virtual servers are unaffected as they have dedicated ONLYOFFICE instances.
We are upgrading our shared onlyoffice1.webarch.net and onlyoffice2.webarch.net servers from Debian Bullseye to Debian Bookworm and also upgrading ONLYOFFICE to version 7.5.1, as a result all Nextcloud instances on shared hosting won't have access to ONLYOFFICE during the upgrade and in addition they will need upgrading to Nextcloud version 27.1.5, this upgrade work is expected to take several hours.
We are upgrading all our hosted Mailcow servers to apply the Moocember 2023 Update | Netfilter NFTables Support and Banlist Endpoint update, this will result in services being unavailable for a little while.
We are upgrading our hosted Mailcow servers to apply the Moocember 2023 Update, this will result in them being unavailable for a little while.
We are upgrading GitLab at git.coop to apply GitLab Security Release: 16.6.2, this will result in the service being unavailable for a little while.
We are upgrading containerd.io to version 1.6.26 on all our servers that run Docker, this update will cause all services running in Docker containers to be restarted, this includes Mailcow and Discourse servers, the services will be unavailable as a result for around 5 minutes.
We are upgrading all Mailcow servers to apply the Moovember 2023 Update Revision A, this will result in the services being unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab Security Release: 16.6.1, as a result the service will be unavailable for a little while.
We are shutting down git.coop for a short while to add additional disk space.
We are upgrading containerd.io on all our servers to version 1.6.25, this will cause all services that run in Docker containers to be restarted, these include Mailcow and Discourse.
We are upgrading all the Mailcow servers we host to apply the Moovember 2023 Update | Quarantine Hotfix (Security), Rspamd 3.7.4, Synchronization Jobs, and Domain Wide Footer Fixes, this will result in Mailcow servers being unavailable for a little while while the updates are applied and the containers are restarted.
We are upgrading GitLab at git.coop to version 16.6, this will result in the service being unavailable for around 10 minutes.
We are upgrading git.coop to GitLab Patch Release: 16.5.2 so the service will be unavailable for a little while.
We are very sorry about the downtime that some sites hosted on our shared hosting servers had overnight and earlier today, which was caused by a missing MySQL PHP extension, this happened when switching the source of the PHP binaries from Debian to the Sury repo and the issue has now been resolved.
We are upgrading all Discourse servers to version 3.2.0.beta3, this release contains six security updates, the details of which are yet to be published.
We are upgrading GitLab to GitLab Security Release: 16.5.1, as a result the service will be unavailable for a little while.
We are upgrading Docker to version 24.0.7 and this will cause all services that run in Docker containers, specifically Discourse and Mailcow, to be restarted.
We are upgrading GitLab at git.coop to GitLab 16.5, this will result in the service being unavailable for a little while.
We are upgrading all the Discourse servers we host to version 3.2.0.beta2, this will result in the sites being offline for a little while.
We are upgrading all our hosted Mailcow servers to install the Mooctober 2023 Update, this will result in the services being unavailable for a little while.
We are shutting down webarch5.co.uk to add some additional storage space to the server's disks, it will be offline for a minute or two as a result.
All our servers are up and running again, sorry for the downtime this evening.
All the virtual servers on xen4.webarch.net are now up and running again, but we have a few that were running on another host during the reboot to move back, we anticipate that all the work should be completed by 10pm at the latest.
The xen4.webarch.net server is up and running again, we are now restarting all the virtual servers it hosts, they should all be up and running again before 10pm, sorry for the downtime.
We are now rebooting xen4.webarch.net, we hope to have all services up and running again as soon as possible.
We are starting to shut down virtual servers on xen4.webarch.net in preparation for the reboot for the security updates, many of our services will be unavailable for the next hour or so, but some, including webarch.email, will be restarted on another server with reduced RAM, they will be moved back after the upgrade.
Tonight at 9pm BST (Sunday 8th October 2023) we intend to reboot the xen4.webarch.net server, following the Debian 11.8 updates that were applied this morning, this machine hosts a lot of virtual servers and these will need to be shut down before the reboot and then restarted afterwards, we anticipate that this will take up to 30 minutes in total, as a result many services will be unavailable for this time.
We are updating the OpenSSH configuration on all servers to ensure that they pass an SSH policy audit, this will result in all RSA keys being updated to new ones with 4096 bits, users of OpenSSH can deploy the Webarchitects SSH fingerprints repo locally to ensure that they have current versions of all the server SSH fingerprints.
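Regenerating a host RSA key at 4096 bits and printing the fingerprint clients should then see looks roughly like this (a sketch of the per-server step; the fingerprints we publish come from the repo mentioned above):

    ssh-keygen -t rsa -b 4096 -N '' -f /etc/ssh/ssh_host_rsa_key   # new 4096-bit host key (confirms before overwriting)
    ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub                   # print its fingerprint
    systemctl reload ssh                                           # sshd picks up the new key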
We are upgrading Mailcow servers to apply Hotfix Update September 2023 (SOGo 5.9.0), as a result the servers will be unavailable for a little while.
We are shutting down webarch2.co.uk to add additional disk space, it should only be offline for a little while.
We are shutting down git.coop in order to add additional disk space, it will be offline for around 5 minutes.
We are restarting Mailcow at webarch.email to update the server and address an issue with Redis and Postfix, it will be unavailable for a little while, no mail will be lost as a result.
We are upgrading GitLab at git.coop, there is not yet any information about the nature of the update, the service will be unavailable for a little while as a result.
We are upgrading git.coop so it will be unavailable for a little while, there is no public release announcement yet.
We are upgrading GitLab at git.coop to apply the GitLab Critical Security Release: 16.3.4, as a result the service will be unavailable for a little while.
We are upgrading containerd on all our Docker servers to version 1.6.24, this will result in all services that run in Docker containers being restarted, specifically Discourse and Mailcow.
We are reconfiguring GitLab at git.coop to test adding a HTTP Content Security Policy header, this will result in the site being unavailable for a little while.
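Omnibus GitLab exposes this in /etc/gitlab/gitlab.rb; a minimal sketch of the kind of change being tested (the values here are illustrative, not our final policy):

    # /etc/gitlab/gitlab.rb (excerpt)
    # gitlab_rails['content_security_policy'] = {
    #   'enabled' => true,
    #   'report_only' => true   # observe violations before enforcing
    # }
    gitlab-ctl reconfigure      # apply the change; services restart briefly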
We are upgrading all our hosted Discourse servers to the latest version, which includes security updates, this will result in all Discourse instances being unavailable for a little while.
We have just upgraded GitLab on git.coop to version 16.2.6, sorry for the downtime this caused.
GitLab at git.coop is being updated to apply the Patch Release: 16.3.2 so it will be unavailable for a little while.
We are upgrading Docker on all servers to version 24.0.6, this will result in all services that run in Docker containers being stopped and restarted.
We are upgrading git.coop to apply GitLab Security Release: 16.3.1, this will result in the service being unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab 16.3 Release, as a result it will be unavailable for a little while.
It appears that our data centre has addressed the networking issues, please get in touch via support@webarchitects.coop if you still have any problems.
We are investigating what appears to be some networking issues with our services in Sheffield, they appear not to be accessible from some locations, the datacentre is working on addressing the issue.
We are upgrading GitLab at git.coop to GitLab Patch Release: 16.2.4, as a result the instance will be unavailable for a little while.
We are upgrading all the Discourse servers we host to 3.1.0.beta8, this will result in the sites being unavailable for a little while.
Sorry for the downtime this morning, the data centre has informed us that a power failure followed by a UPS failure was the cause. We believe that all our services are up and running again, please get in contact via email if you are aware of any outstanding problems.
We are very sorry for the current outage for our services in Sheffield, there appears to have been a power failure at around 2am and we are working on bringing up all the servers and services.
We are upgrading GitLab at git.coop to GitLab Patch Release: 16.1.4, the instance will be unavailable for a little while as a result.
We are upgrading GitLab at git.coop to apply GitLab Security Release: 16.2.2, the instance will be unavailable for a little while as a result.
We are upgrading containerd.io on all our servers to containerd 1.6.22, this will result in all services that run in Docker containers being restarted, this includes Discourse servers. In addition we are also updating Mailcow to apply the 2023-07a update at the same time, so Mailcow services will be unavailable for a little while.
All the virtual servers on xen4.webarch.net have been restarted, sorry for the downtime.
The xen4.webarch.net server has been rebooted and we are now starting all the virtual servers it hosts.
We are starting to shut down all the virtual servers on xen4.webarch.net in order to reboot the server to apply the [DSA 5461-1] linux security update to the host and all Debian Bullseye virtual servers that run on this machine and the [DSA 5462-1] linux security update to the Debian Bookworm virtual servers that run on this machine.
We are planning on rebooting xen4.webarch.net at 8pm BST tonight, Sunday 30th July, to apply the [DSA 5461-1] linux security update to the host and all Debian Bullseye virtual servers that run on this machine and the [DSA 5462-1] linux security update to the Debian Bookworm virtual servers that run on this machine. This will result in up to 30 minutes of downtime.
We are upgrading all our hosted Discourse servers to apply 3.0.6: Security and bug fix release, this will result in the sites being unavailable for a little while.
We are upgrading all the Mailcow servers we host to apply the Mooly Update 2023 - Manageable CORS Settings and UI Improvements, this will result in the service being unavailable for a little while.
All the virtual servers on xen4.webarch.net have been restarted, sorry for the downtime.
We have rebooted xen4.webarch.net and we are now starting all the virtual servers the machine hosts.
We are starting to shut down all the virtual servers on xen4.webarch.net in order to reboot the server to apply the amd64-microcode security update, this will result in all the servers being unavailable for a while.
We are planning to reboot xen4.webarch.net at 8pm BST tonight, 26th July 2023, to apply the amd64-microcode security update, this will result in all the virtual servers running on this host being shut down and restarted and as a result they will all be unavailable for a while.
We are upgrading Docker on all our servers to version 24.0.5, this will result in all services that run in Docker containers being restarted, including Mailcow and Discourse instances.
We are upgrading GitLab at git.coop to version 16.2 and as a result it will be unavailable for a little while.
We are upgrading all our hosted Discourse servers to version 3.1.0.beta6, this will cause the instances to be unavailable for a little while.
We are upgrading Docker on all our servers to install version 24.0.4, this will cause all services that run in Docker containers to be restarted, including Discourse and Mailcow.
We are updating all our Mailcow servers so they will be unavailable for a little while.
We have just updated Docker on all our servers to version 24.0.3, this will have caused services that run in Docker containers to be restarted including Mailcow and Discourse.
We are upgrading GitLab at git.coop to apply GitLab Security Release: 16.1.2, as a result the service will be unavailable for a little while.
We are upgrading all the Discourse servers we host to version 3.1.0.beta5, this will result in the sites being unavailable for between 5 and 10 minutes.
We are upgrading GitLab to version 16.0.2, as a result it will be unavailable for a little while.
We are updating the Mailcow servers we host to apply a Critical Security Update, as a result they will be unavailable for a little while.
We are updating Docker on all our servers to version 24.0.2 and at the same time updating Mailcow to apply the Mooai Update 2023, this will result in webarch.email
and other Mailcow and also Discourse servers being unavailable for a little while.
In the face of a large ongoing attempt to brute force email account logins to webarch.email we have set the max number of failures before an IP address is blocked to one, we hope to be able to set this back to a reasonable number once we have blocked all the IP addresses that are being used for the attack.
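Mailcow manages these thresholds itself (via its netfilter container and the Fail2ban-style parameters in the admin UI), but as a generic illustration of the change, in plain Fail2ban terms this is maxretry = 1:

    # /etc/fail2ban/jail.local (illustrative only; Mailcow has its own mechanism)
    [dovecot]
    enabled  = true
    maxretry = 1        # ban an IP after a single failed login
    bantime  = 86400    # seconds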
We are upgrading GitLab on git.coop to GitLab 16.0 so it will be unavailable for a little while.
Our monitoring detected two periods of connectivity failure for our Sheffield based services yesterday evening (Sunday 21st May 2023) between 17:00-18:00 and 21:00-21:30. We have heard from the data centre that the cause of this issue was 'a simultaneous failure of two AC units which caused a rapid temperature increase which in turn made the core switch hang', there are now backup portable AC units in place and the main units will hopefully be fixed as soon as possible.
We are upgrading Docker to version 24.0.1 and this will result in a little downtime for services that run in Docker containers.
We are taking measures to mitigate what appears to be a DDOS against our infrastructure in Sheffield.
We are aware of connectivity issues with our infrastructure in Sheffield and are investigating the cause.
We are very sorry about the unscheduled downtime this evening but we are happy to report that everything is now up and running again.
The xen4.webarch.net server is back up, we are now restarting all the virtual servers, this will take a little while.
We have rebooted xen4.webarch.net which hosts multiple virtual servers, however it hasn't come up straight away so someone is on their way to the data centre to bring it back up, we are sorry to say that it might be another hour before it is online again.
We are upgrading Docker to version 24.0.0, this will result in services that run in Docker containers being restarted, specifically Mailcow and Discourse servers.
We are upgrading GitLab at git.coop to version 15.11.3, as a result it will be unavailable for a little while.
We are upgrading Docker to version 23.0.6, as a result services that run in Docker containers, specifically Mailcow and Discourse, will be unavailable for a little while.
We are upgrading containerd to version 1.6.21 on all our servers, this will cause Docker to be restarted and affect Mailcow and Discourse servers, they should only be unavailable for a little while.
We are upgrading GitLab at git.coop to version 15.11.2 so it will be unavailable for a little while.
We are updating Docker to version 23.0.5, as a result Mailcow, Discourse and other services that run in Docker containers will be unavailable for a little while.
We are shutting down webarch1.co.uk to add additional disk space, it should only be offline for a little while.
We are upgrading our hosted Mailcow servers to apply the Moopril Update 2023 - SOGo 5.8.2, Rspamd 3.5 and more, this will result in email services that run on Mailcow servers being unavailable for a little while.
We are upgrading GitLab at git.coop to GitLab 15.11, as a result it will be unavailable for a little while.
We are updating all the Discourse servers we host to version 3.1.0.beta4 and as a result the sites will be unavailable for a little while.
Our services are currently degraded due to a denial of service attack against one of our domain name servers, we are working with the data centre to mitigate this issue.
We are upgrading Docker to version 23.0.4, this will result in services that run in Docker containers being stopped and restarted, specifically Mailcow and Discourse.
We are upgrading git.coop to GitLab Patch Release: 15.10.3, so it will be unavailable for a little while.
Our Mailman server at email-lists.org is up and running again, sorry for the downtime.
We are sorry to say that our Mailman server at email-lists.org is currently unavailable, we are working on restoring this service.
We have rolled back the Mailcow rspamd container as suggested in this thread on all our hosted Mailcow servers and re-enabled rate limiting on webarch.email, please let us know if any users have any issues that might be related to this problem.
Following this morning's Mailcow update we have received a report of an issue related to rate limiting for webarch.email, for now we have disabled rate limiting as other Mailcow instances have also had an issue with this.
We are updating all our hosted Mailcow servers to apply the Moopril Update 2023 - SOGo 5.8.2, Rspamd 3.5 and more update, this will result in them being unavailable for a little while.
We are updating Docker on all our servers to version 23.0.3, this will result in services running in Docker containers being restarted.
We are upgrading containerd on all our Docker servers to version 1.6.20 and this will result in all services that run in Docker containers being stopped and restarted so they won't be available for a little while, this includes Mailcow and Discourse servers.
We are upgrading git.coop to apply GitLab Security Release: 15.10.1 so the site will be unavailable for a little while.
We are upgrading Docker on all our servers to version 23.0.2, this will result in all services that run in Docker containers being restarted, including Mailcow and Discourse, so these services will be unavailable for a little while.
We are upgrading PostgreSQL on git.coop from version 12 to version 13 and as a result the site will be offline for a little while.
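Omnibus GitLab bundles both PostgreSQL versions and provides a helper that handles the upgrade and restart cycle; schematically (a sketch, assuming the Omnibus package):

    gitlab-ctl pg-upgrade -V 13   # stops the database, runs pg_upgrade, brings GitLab back up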
We are upgrading containerd.io on all our Docker servers, this will result in services that run in Docker containers, such as Mailcow and Discourse, being restarted.
We are updating GitLab at git.coop to version 15.10, this will result in it being unavailable for a little while.
We are upgrading all our hosted Discourse instances to version 3.1.0.beta3, this will result in the sites being unavailable for around 10 minutes.
We need to add additional disk space to webarch3.co.uk, this will require the server to be shut down for a little while.
We are upgrading git.coop to apply GitLab Patch Release: 15.9.3, this will result in the service being unavailable for a little while.
We are upgrading all our Mailcow servers for a security update and in addition this update will result in the user interface being updated.
We are upgrading Docker to version 23.0.1, this will cause services which run in Docker containers to be unavailable for several minutes, this includes Mailcow and Discourse services.
We are updating git.coop to apply a Critical Security Release: 15.8.2, this will result in the service being unavailable for a while.
We are upgrading Docker on all our servers to version 23.0.0, this will result in all servers that run services in Docker containers being unavailable for around 5 minutes, specifically Mailcow and Discourse servers.
We have rolled back the ONLYOFFICE servers to snapshots taken prior to the 2023-01-31 upgrade to version 7.3 in order to restore the service.
We are very sorry that downgrading ONLYOFFICE has not solved the problem of documents not loading in Nextcloud, we will try to fix this ASAP however it might take some time to solve this problem.
We are sorry that a recent ONLYOFFICE upgrade to ONLYOFFICE Docs v7.3 has broken the WYSIWYG editor for Nextcloud, we are looking at downgrading to version 7.2 while searching for a solution to this issue.
There is an update available for containerd.io that we are applying, this will result in all services that run in Docker containers being restarted, this includes all Mailcow and all Discourse servers.
We are shutting down webarch5.co.uk to extend a disk, this will result in it being offline for a minute or so.
We are upgrading all our hosted Discourse servers to 3.1.0.beta2: Security fixes, new API scopes and more, this will result in them being unavailable for around 5 minutes.
We are upgrading GitLab at git.coop to version 15.8 and as a result the service will be unavailable for a little while.
We are upgrading Docker to version 20.10.23 on all our servers that run services using Docker, this will result in around 5 minutes of downtime for Mailcow and Discourse servers.
We are upgrading GitLab on git.coop to apply GitLab Critical Security Release: 15.7.5, the instance will be unavailable for a little while as a result.
We are upgrading all our Discourse servers to Discourse Version 3.1, this will result in them all being unavailable for around 10 minutes.
We are upgrading git.coop to install GitLab Patch Release: 15.7.3, this will result in the service being unavailable for around 5 minutes.
We are upgrading GitLab on git.coop to apply GitLab Security Release: 15.7.2, this will result in around 5 minutes of downtime for this service.
We are upgrading containerd.io, this will result in all services that are run in Docker containers being restarted, so they will be unavailable for a little while.
We are upgrading all our hosted Discourse servers to apply 3.0.0.beta16: Security release, this will result in the Discourse servers being unavailable for around 10 minutes.
We are upgrading GitLab on git.coop to apply GitLab Patch Release: 15.7.1 so it will be unavailable for a little while.
We are upgrading all our hosted Discourse servers to version 3.0.0.beta15, this will result in them being unavailable for around 10 minutes.
The configuration error we made on webarch.email yesterday has been rectified, this mistake caused email to be delayed, everything should catch up in the next few hours as email that could not be delivered is retried. We are very sorry for any inconvenience caused.
We are very sorry about the issue with webarch.email this morning (it was not sending or receiving email), this problem has now been resolved.
We are updating our Mailcow servers to install Docker 20.10.22, this will result in them being unavailable for around 5 minutes.
There is a new version of Docker available, 20.10.22 (however there are no release notes yet so we don't know if this is a security update or not). We are upgrading all the Discourse servers to use it now and we will update the Mailcow servers this evening to minimise disruption to users.
We are updating containerd.io on our servers with it installed, this will result in Docker being restarted so services that run in Docker containers, specifically Mailcow and Discourse, will be unavailable for a little while.
We are upgrading all our Mailcow servers to apply the Moovember Update 2022 - Sogo 5.8.0, Rspamd 3.4.0 and PHP 8.1 Update | Revision B, this will result in the Mailcow servers we host, including webarch.email, being unavailable for a little while.
We are upgrading the Docker package containerd.io on all our servers that use Docker, this will cause services that run in Docker containers to be restarted, specifically Mailcow and Discourse.
We are upgrading git.coop to GitLab Patch Release: 15.6.2, this will result in the service being unavailable for a little while.
We are upgrading GitLab at git.coop to apply GitLab Security Release: 15.6.1 so it will be unavailable for a little while.
We are upgrading our Discourse servers to 2.9.0.beta14: Security fix, new import script, API tracking, and more, this will result in them being unavailable for a little while.
We are upgrading all our hosted Discourse servers to 2.9.0.beta13: Security fixes, sidebar improvements, new API scopes, and more, this will result in the Discourse instances not being available for a little while.
We are upgrading GitLab at git.coop to GitLab 15.6, released with improvements to security policies, CI/CD variables, and DAST API, this will result in about 5 minutes of downtime for this service.
We are updating containerd.io on all the servers we host that run Docker containers, this will result in services run in Docker containers being restarted, specifically Mailcow and Discourse and a few others.
We are applying the GitLab Patch Release: 15.5.4 to git.coop so it will be unavailable for a little while.
We are upgrading our hosted Discourse servers to 2.9.0.beta12: Security fix, bug fixes and more, this will result in them being unavailable for a little while.
We are upgrading git.coop to GitLab Patch Release: 15.5.3 and this will result in the service being unavailable for a little while.
We are upgrading git.coop to apply GitLab Security Release: 15.5.2, this will result in a little downtime.
We are upgrading all our hosted Discourse servers to version 2.9.0.beta11: Security fixes, New general category, Sidebar improvements, and more, this will result in the sites being unavailable for a little while.
Yesterday a Critical stability update was released for Mailcow, we are applying this update now and it will result in hosted Mailcow servers being unavailable for a little while.
We are upgrading Docker on all our servers to version 20.10.21, this will result in services that run in Docker containers, specifically Mailcow and Discourse, being restarted and therefore unavailable for a little while.
We are upgrading containerd.io on all our servers running Docker, this will result in services running in Docker containers being restarted, specifically Discourse and Mailcow instances. In addition we are applying the Mailcow Mooctober Update 2022 and this will result in Mailcow servers being unavailable for a little while.
We are upgrading git.coop to apply GitLab Patch Release: 15.5.1, so it will be unavailable for a little while.
We are upgrading git.coop to GitLab 15.5 so it will be unavailable for a little while.
Yesterday's issue with Matomo instances was resolved last night. We are upgrading git.coop to apply GitLab Patch Release: 15.4.3, as a result it will be unavailable for a few minutes.
There is an issue with most of our Matomo instances following the upgrade to version 4.12.1 which we are working on solving.
We are doing some work on upgrading our shared server host2.webarch.net so it will be unavailable for a little while.
We have solved the issue with Nextcloud loading documents via ONLYOFFICE and we are upgrading all the Nextcloud instances we host to the latest version, 24.0.6.
There is a problem with our hosted Nextcloud services connecting to the ONLYOFFICE servers, we are working on getting to the bottom of this issue but we are afraid that it might take another day or so to solve this one.
We are moving one of our ONLYOFFICE servers between physical hosts and this will result in some clients using Nextcloud not having ONLYOFFICE availability for a couple of minutes.
All our virtual servers are up and running again, once again very sorry for the downtime.
We have accidentally rebooted one of our main, virtual server hosting, physical servers and as a result most of our services are going to be offline for a while while we bring everything back up again, we are very sorry for this mistake.
We are updating git.coop to apply the GitLab Patch Release: 15.4.2, this will cause it to be unavailable for a little while.
We are upgrading all our hosted Discourse servers to apply the latest update, 2.9.0.beta10: Sidebar, New notification menu, Security fixes, and more, this will result in the sites being unavailable for around 10 minutes as the containers are rebuilt.
We are upgrading git.coop with GitLab Security Release: 15.4.1.
We are stopping all our Mailcow servers to apply the Mootember Update 2022 - Quarantine & Swagger UI Fix Update | Changes, this will cause them to be unavailable for a little while.
We are going to shut down webarch.email to add some additional disk space, this will result in it being offline for a little while.
We are updating the OpenSSH configuration on all our servers and generating SSH fingerprints for clients to use, see the Webarchitects SSH Fingerprints project.
We need to add some additional disk space to webarch7.co.uk and this will result in it going offline for a minute or two.
We are about to update Docker on all servers to version 20.10.18 which "comes with a fix for a low-severity security issue, some minor bug fixes, and updated versions of Docker Compose, Docker Buildx, containerd, and runc" and in addition we are going to upgrade all the hosted Mailcow servers to apply the Amoogust Update 2022 - The Nightly Build Switch Update (Revision B) to address issues with sending outgoing email using SMTP via SOGo that two clients have reported to us.
We are upgrading GitLab at git.coop to GitLab Patch Release: 15.3.3, as a result GitLab will be unavailable for a little while.
We had a short outage for webarch6.co.uk due to the Linux kernel OOM killer taking out MariaDB, we are now going to reboot the server to increase the dedicated RAM this server has access to from 26GiB to 30GiB with the aim of ensuring that this doesn't happen again.
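OOM kills are logged by the kernel, so events like this show up with something along the lines of:

    journalctl -k | grep -i -e 'out of memory' -e oom    # kernel log entries from the OOM killer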
We have just restarted webarch.email due to issues it was having, sorry for the downtime.
We are upgrading our GitLab server at git.coop from Debian Buster to Debian Bullseye and as a result it will be unavailable for a little while over the next hour or so.
We are upgrading all our hosted Mailcow servers to apply the Amoogust Update 2022, this will result in them being unavailable for a little while.
We are shutting down the GitLab server at git.coop to add additional disk space, it will be offline for around 5 minutes as a result.
We are rebooting the shared hosting servers webarch2.co.uk and webarch4.co.uk to solve an issue with the system clock on these two servers, they will be offline for a few seconds as a result.
We are upgrading containerd.io on all our Docker servers, this will result in all services that run in Docker containers being restarted, specifically Mailcow and Discourse, there will be a little downtime for these services as a result.
We are restarting webarch2.co.uk
to address an issue with disk quotas, it will be unavailable for a short time as a result.
We are upgrading GitLab on git.coop
to version 15.3.1, so it will be unavailable for a little while.
We are shutting down the webarch2.co.uk
shared hosting server for a few moments to add additional disk space, it shouldn't take long before it is back up again.
We are upgrading all our Discourse servers to version 2.9.0.beta9 so they will be unavailable for a little while.
We are updating containerd.io
on all our servers running Docker, this will result in services, for example Discourse and Mailcow, that run via Docker, being restarted, so they will be unavailable for a moment.
We are upgrading GitLab at git.coop
to GitLab Patch Release: 15.2.2 so the service will be unavailable for a little while.
We are upgrading all our Mailcow servers to apply the Mooly Update 2022 - Revision A, this will result in the servers being unavailable for a little while.
We are upgrading all our hosted Discourse servers to version 2.9.0.beta8 so they will be unavailable for a little while.
We are upgrading git.coop
to version 15.2.1 so it will be unavailable for a little while.
We are upgrading Nextcloud to version 23.0.7 on all the Nextcloud servers we host this afternoon (not the Nextcloud instances on shared hosting) and also removing Memcache and replacing it with Redis to potentially address an issue that some servers have had. This will result in the Nextcloud instances being offline for a little while.
We are upgrading GitLab to version 15.2 so the instance at git.coop
will be unavailable for a little while.
We are rebooting webarch1.co.uk
, sites running on this server will be unavailable for a moment.
We are upgrading git.coop
to apply GitLab Patch Release: 15.1.3, so it will be unavailable for a little while.
We are upgrading all our Mailcow servers for the Mooly Update 2022 - TFA Flow Update and as a result they will be unavailable for a little while.
We are upgrading all our Discourse servers to version 2.9.0.beta7 so they will be unavailable for a little while.
Additional disk space has been added to webarch.email
and it is up and running again.
We are adding some additional disk space to webarch.email
and webarch2.co.uk
so they will be unavailable for a little while.
We are upgrading git.coop
to install GitLab Patch Release: 15.1.2, this will result in the service being unavailable for a little while.
We are upgrading GitLab at git.coop
to version 15.1.1 due to a GitLab Critical Security Release, as a result it will be unavailable for a little while.
We are upgrading all our Discourse servers to version 2.9.0.beta6
so they will be unavailable for a little while.
All the updates to the Mailcow servers have been completed.
We are upgrading all our Mailcow servers to apply an important security update, this will result in a little downtime for them all.
We are upgrading git.coop
to GitLab 15.1 so it will be unavailable for a little while.
We are upgrading GitLab at git.coop
to Patch Release: 15.0.3, so it will be unavailable for a little while.
We are upgrading all our Discourse servers to version 2.9.0.beta5 so they will be unavailable for a little while.
All the updates to the Mailcow servers have been completed.
We are going to start updating all our Mailcow servers to run Docker Compose version 2 and we will apply the Moone Updates from 19:30 BST tonight, this will cause around 15 to 30 minutes of downtime.
We are about to add some additional disk space to webarch1.co.uk
, this will result in a moment of downtime as we reboot the server.
We are about to add some additional disk space to webarch4.co.uk
, this will result in a moment of downtime as we reboot the server.
We are upgrading GitLab to version 15.0.2 so git.coop
will be unavailable for a little while.
We are upgrading Docker on all our servers to version 20.10.17 and this will result in a little downtime for services that depend on Docker such as Mailcow and Discourse.
We are upgrading GitLab on git.coop
for a Critical Security Update for 15.0.1, so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab at git.coop
so it will be unavailable for a little while.
The upgrades to the Mailcow servers have been completed and all instances are back online.
We are applying an Important Security Update to all the Mailcow servers we host so they will be offline for around 10 minutes.
We are upgrading Docker to version 20.10.16 on all our servers so services that depend on Docker will be unavailable for a little while, in addition we are upgrading Mailcow and applying the Mooay 2022 Update.
We are upgrading GitLab at git.coop
to GitLab Patch Release: 14.10.2, so the server will be unavailable for a little while.
We are upgrading Docker on all our servers to Version 20.10, this will result in a little downtime for services such as Discourse and Mailcow which run in Docker containers.
We are upgrading GitLab on git.coop
, for more details see GitLab Security Release: 14.10.1, 14.9.4, and 14.8.6, the service will be offline for around 5 minutes while we apply the update.
We are upgrading GitLab at git.coop
to version
We have completed the Mailcow upgrades and the servers are back up and running.
We are upgrading all our Mailcow servers, including webarch.email to apply the Moopril 2022 update, this will result in them being unavailable for a little while.
We have completed the upgrade to webarch6.co.uk
and it is back up and running.
We are adding disk space to webarch6.co.uk
so sites hosted on this server will be unavailable for a little while.
We are upgrading all the Discourse servers we host from version 2.9.0.beta3
to version 2.9.0.beta4
and as a result they will all be unavailable for a while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab at git.coop
so it will be offline for a little while.
We are upgrading Docker on all our servers, to address a security issue, CVE-2022-24769, this means that services run in Docker, specifically Mailcow and Discourse, will be unavailable for a little while.
We are also upgrading Discourse on all our Discourse servers so they will be unavailable for a little while.
We are upgrading GitLab at git.coop
so it will be unavailable for a little while.
We need to shutdown webarch1.co.uk
for a little while to add additional disk space, it won't be down for long.
We are upgrading Docker on all our servers, this will result in a little downtime for Mailcow and Discourse servers, in addition we are rebooting webarch6.co.uk
and webarch7.co.uk
to add additional RAM.
We are checking and adjusting the memory allocations for MariaDB (MySQL) on all our shared hosting servers and adding more physical RAM as need be, this will require servers to be rebooted and several MariaDB restarts, this will all be quick so there will be hardly any interruption to services.
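To give a sense of the kind of check involved, here is a rough, illustrative sketch that sizes the InnoDB buffer pool from the RAM on a Linux server; the 40% ratio is an assumption for shared hosting, not our exact tuning policy.
# Suggest an InnoDB buffer pool size from /proc/meminfo.
with open("/proc/meminfo") as f:
    meminfo = dict(line.split(":", 1) for line in f)
total_kib = int(meminfo["MemTotal"].strip().split()[0])
suggested_mib = int(total_kib / 1024 * 0.4)  # assumed 40% of RAM
print(f"innodb_buffer_pool_size = {suggested_mib}M")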
The MariaDB (MySQL) service on webarch7.co.uk
has been restored, we are very sorry for the downtime.
The MySQL service on webarch7.co.uk
crashed last night and we are working on recovering the service, very sorry about the downtime.
We are upgrading containerd.io
on all our servers which means that Docker will be restarted, this means that Mailcow and Discourse servers will be restarted. We are also going to upgrade all our Mailcow servers to install the Moorch Update.
We are upgrading GitLab at git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading all our Discourse servers so they will be unavailable for a little while.
We are restarting the Docker containers on webarch.email
to change some settings, this will result in a minute or two of downtime.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are adding some disk space and RAM to host2.webarch.net
so sites hosted on this server will be unavailable for a little while.
We are shutting down webarch4.co.uk
to add additional disk space, it shouldn't take long before it is back up again.
We are upgrading all our hosted Discourse servers so they will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are adding some additional disk space to webarch6.co.uk
so it will be offline for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are extending a disk on webarch7.co.uk
so it will be unavailable for a little while.
We are upgrading Discourse on all our servers so Discourse sites we host will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Discourse on all our hosted servers so forums will be unavailable for a little while.
We are upgrading Mailcow on all our servers so services such as webarch.email
will be unavailable for a little while.
We are upgrading Docker on all our servers, this will result in a little downtime for services that run in Docker containers, specifically Discourse and Mailcow servers.
We are updating all our Mailcow servers, including webarch.email
so they will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Discourse for all our hosted servers so sites will be unavailable for a little while.
We are upgrading GitLab at git.coop
so it will be unavailable for a little while.
Please note that the last GitLab upgrade for git.coop has resulted in a situation where all git pull
and git commit
requests will print a message like "time="2021-11-23T11:27:48Z" level=info msg="SSL_CERT_DIR is configured" ssl_cert_dir=/opt/gitlab/embedded/ssl/certs/
", see this issue, hopefully it will be resolved with another upgrade in the near future.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Docker on all our servers, this will cause Docker to restart so will cause a little downtime for Mailcow and Discourse servers.
We are upgrading containerd.io
on all our servers, this will cause Docker to restart so will cause a little downtime for Mailcow and Discourse servers.
We are upgrading Discourse on all our hosted instances so they will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Docker on all our servers, this will result in a little downtime for Mailcow and Discourse servers.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Discourse on all our servers so all hosted Discourse sites will be unavailable for a little while.
We are upgrading Docker on all our servers so services that depend on it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be offline for a little while.
We are updating all servers that use Docker (Mailcow and Discourse) so they will have a little downtime.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are updating our Mailcow servers so webarch.email
and others will be unavailable for a little while.
We are shutting down webarch1.co.uk
for a little while so we can extend one of its disks, we don't expect it to be offline for more than a few minutes.
We are very sorry but due to a misconfiguration error, which we are currently fixing, sites on shared hosting on an account with a MediaWiki site have had a little downtime.
We are updating all our Mailcow servers to apply a fix for a memory leak in Rspamd, this means that services such as webarch.email
will be unavailable for a little while.
We are updating all our hosted Discourse servers to version 2.8.0.beta5 so they will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
Maintenance has finished on host xen4.webarch.net and the affected services on webarch.email
, runner.git.coop
, cloud.leedsbread.coop
, holyoake.webarch.coop
, sharenergy.webarch.net
, host3.webarch.net
, host2.webarch.net
, webarch4.co.uk
, webarch7.co.uk
, webarch1.co.uk
, webarch2.co.uk
, webarch3.co.uk
, webarch6.co.uk
, webarch5.co.uk
, git.coop
and animorph.webarch.coop
should be functioning again.
Essential network maintenance will be starting at 21:00 GMT (22:00 UK BST). Affected hosts are animorph.webarch.coop
, cloud.leedsbread.coop
, git.coop
, holyoake.webarch.coop
, host2.webarch.net
, host3.webarch.net
, runner.git.coop
, sharenergy.webarch.net
, webarch1.co.uk
, webarch2.co.uk
, webarch3.co.uk
, webarch4.co.uk
, webarch5.co.uk
, webarch6.co.uk
, webarch7.co.uk
and webarch.email
We are upgrading our Discourse servers to Debian Bullseye this afternoon so there will be a little downtime when they are all rebooted.
We are upgrading the Mailcow servers so webarch.email
and others will be unavailable for a little while.
The upgrade to the shared hosting servers, webarch1.co.uk
, webarch2.co.uk
, webarch3.co.uk
, webarch4.co.uk
, webarch5.co.uk
, webarch6.co.uk
and webarch7.co.uk
is more-or-less complete, we are now going to migrate the GitLab server at git.coop
to a different set of disks and as a result it will be unavailable for a little while.
We are upgrading our main shared hosting servers from Debian Buster to Debian Bullseye, webarch1.co.uk
, webarch2.co.uk
, webarch3.co.uk
, webarch4.co.uk
, webarch5.co.uk
, webarch6.co.uk
and webarch7.co.uk
so there will be around an hour of downtime for sites hosted on these servers at some point this evening.
The upgrades for host2.webarch.net
and host3.webarch.net
have completed but we are still running some tasks to check all the configurations, if you have any issues with sites on either of these servers please email support@webarchitects.coop and we will address any problems tomorrow morning.
The upgrade of host3.webarch.net
to Debian Bullseye is mostly completed, hosted sites should be working again, there are still some additional tasks to run and the sites on host2.webarch.net
should be up again fairly soon.
We are upgrading two shared hosting servers, host2.webarch.net
and host3.webarch.net
to Debian Bullseye this evening starting at 8pm BST, sites on these servers will be unavailable for a while during the upgrade process.
We are upgrading all our Mailcow servers so webarch.email
will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading our Mailcow servers so webarch.email
and other servers will be unavailable for a little while.
We are upgrading Docker on all our servers and this will result in a little downtime for services that run in Docker containers, specifically Mailcow and Discourse servers.
We are going to shutdown webarch6.co.uk
to add some additional disk space to the server, it shouldn't be down for long.
We need to expand the disks on several shared hosting servers and in order to do this several servers will be shutdown for a little while.
We are upgrading Docker on all our servers so services that depend on it, like Mailcow and Discourse, will be restarted.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Discourse on all our servers so they will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
Due to some issues with the execution of Sieve filters on webarch.email
we are going to restart the server, this will result in a little downtime.
We are upgrading Docker on all our servers, this will result in a little downtime for servers that use Docker, specifically Discourse and Mailcow servers.
We are upgrading Discourse on all our servers, this means that Discourse sites will be unavailable for a little while.
We are going to shutdown webarch6.co.uk
for a little time so we can extend one of the server's disks, so sites hosted on this server will be unavailable for a little while.
We are upgrading our Mailcow servers so webarch.email
and other servers will be unavailable for a little while this evening.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab for git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Discourse on all our hosted servers so these forums will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Docker on all our servers that run it, this will cause a little downtime for services such as Mailcow on webarch.email
and Discourse.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Docker across all our servers and this will result in a little downtime for services that depend on Docker, for example Mailcow on webarch.email
and all our Discourse servers.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Discourse on all our servers so there will be a little downtime for servers running Discourse.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
All the services have been restored on older servers while we look into the issue with the new server. We believe that everything is running without issues but there will be an email backlog for a while, if you find any problems please contact us.
We have brought up most servers affected by the issue with the connection to the disk array on other, older front facing servers, there are three servers still down, these include git.coop
and webarch.email
, we hope to be able to bring these back up soon.
We appear to have a problem with the mounting of the storage array on our latest front facing server so we are going to restart it, this should solve the issue.
We are upgrading all our hosted Discourse servers so they will be unavailable for a little while.
We are upgrading all the Discourse servers we host so they will be unavailable for a little while.
The webarch.email
server is up and running again, very sorry about the downtime this morning.
We believe that all services have been restored.
We are sorry that our new front facing server's connection to the new disk array failed overnight, this resulted in the virtual servers it is running having errors / being out of action, we are restarting everything now.
We are adding some additional resources to the webarch.email
server and this requires a restart so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be offline for a little while.
We are moving some servers to new hardware, this will result in a little downtime for each server as we power it down on the old computer and then bring it up again on the new one.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Mailcow on webarch.email
so it will be unavailable for a little while.
We are upgrading Discourse on all our hosted servers so they will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Docker across all our servers and this will result in a little downtime for services that run via Docker, specifically Mailcow and Discourse.
We are upgrading all the Discourse servers we host so they will be unavailable for a little while.
We are upgrading GitLab on git.coop
, so it will be unavailable for a little while.
We are upgrading Mailcow at webarch.email
, this will result in the service being unavailable for a little while.
The upgrade for GitLab at git.coop
has been completed.
We are shutting down GitLab at git.coop
in order to extend one of the server's disks, this shouldn't take long.
We are upgrading Mailcow on webarch.email
so it will be unavailable for a little while.
We have had the following response from Cloudmark: 'I have reset the reputation of your IP, so you should see delivery improve shortly. Please note that updates do not occur instantly but should generally happen within an hour of receiving this response.'
We are aware of email delivery issues to servers that use Cloudmark Sender Intelligence, we don't know what the reason for the CSI RBL listing is but we have requested removal.
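Cloudmark CSI is not a public DNS blacklist so it cannot be queried directly, but for the DNSBLs that are public, a listing can be checked with a few lines of Python; this is an illustrative sketch, the IP is a placeholder and the Spamhaus zone is just a well-known example.
import socket
ip = "192.0.2.10"  # placeholder address, not one of our mail servers
query = ".".join(reversed(ip.split("."))) + ".zen.spamhaus.org"
try:
    print("listed:", socket.gethostbyname(query))  # DNSBLs answer 127.0.0.x when listed
except socket.gaierror:
    print("not listed")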
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We need to shutdown git.coop
and
We are upgrading Discourse on all our servers so there will be a little downtime for these services.
There is a new containerd.io
Docker package available and this means that we will need to restart Docker across all our servers, so Mailcow on webarch.email
and all our Discourse servers will be unavailable while this package is upgraded.
We are upgrading a Docker package on all our servers and this will result in a little downtime for all services that run on Docker.
We are upgrading GitLab on git.coop
so it will be offline for a little while.
We are updating Docker on all our servers so services that depend on it, Mailcow on webarch.email
and all the Discourse servers will be unavailable for a little while.
The webarch4.co.uk
server is back up.
We are shutting down the webarch4.co.uk
shared hosting server to extend one of its disks, it shouldn't be offline for long.
We are upgrading Docker on all our servers and this will result in a little downtime for services that run on Docker, specifically Mailcow and Discourse.
We are upgrading GitLab on git.coop
so it will be offline for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Discourse on all our servers and this will result in them being offline for a little while.
Migrating git.coop
to our new SSD file array is taking a little longer than expected, the biggest 200G disk is 65% done, we hope the transfer will be completed sometime after 6pm.
We are going to shutdown GitLab at git.coop
in order to migrate it to the new SSD based storage array, it might be offline for around an hour or so as a result, but will be faster when it is back up again.
We are shutting down GitLab at git.coop
to grow the partition containing the Docker registry, it should only be offline for a little while.
We are upgrading Mailcow on webarch.email
so it will be unavailable for a little while.
The GitLab runner server, runner.git.coop
is now up and running again.
We are migrating the GitLab runner server, runner.git.coop
from our old HDD storage array to the new SSD storage array so it will be offline for around another half hour.
We are shutting down runner.git.coop
to add additional space to the disks.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We need to shutdown host2.webarch.net
for a little while to expand a disk, it shouldn't be offline for long.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Docker on all our servers and this will cause a little downtime for services such as Mailcow on webarch.email.
We are upgrading Discourse on all the managed Discourse servers we host so they will be unavailable for a little while.
We have discovered an issue with our anti-spam gateway, mx.webarch.net
, this appears to have resulted in some email being delayed or returned, we are now adding some additional capacity to the server and it should be fully functional again shortly.
We are upgrading Mailcow on webarch.email
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading all our managed Discourse servers so they will be unavailable for a little while.
The upgrade of GitLab on git.coop
has completed and the server is up and running as normal, sorry for the downtime.
We are extending one of the disks on git.coop
prior to the upgrade of GitLab, we expect it will be unavailable for another 10 minutes or so.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
Over the next few hours we are upgrading all Discourse servers to use PostgreSQL 13 and this migration process will result in a little downtime for each server.
We are upgrading Docker on all our servers, this will result in a little downtime for services that depend on it like Discourse and Mailcow on webarch.email
We are shutting down webarch4.co.uk
to enable a disk to be extended, this shouldn't take long.
We are upgrading GitLab on git.coop
, it will be offline for a little while.
We are upgrading GitLab at git.coop
so it will be unavailable for a little while.
We are restarting webarch.email
, sorry for the time it took to move the server and the inconvenience caused.
50% of the main 1TiB webarch.email
disk has now been copied, we anticipate that the mail service will be operational again at some point on Sunday morning, once again we are sorry about this downtime and the fact that it is lasting longer than expected.
We are afraid that copying the webarch.email
server from our old disk array to the new one is taking longer than expected, 33% of the main 1TB disk has been copied to the new server, at this rate we don't anticipate that the server will be back online until Sunday morning, we are very sorry for the inconvenience this will cause people today.
The webarch.email
file systems have been checked and we have started copying the disks to the new SSD based file server.
The webarch.email
server is now shutdown and we are running checks on the disk image prior to copying them to the new file server.
We are shutting down webarch.email
at 23:00 UTC in order to migrate it to a new server, due to the size of the server we anticipate that this will take around 7 hours. No email should be lost, other mail servers should queue the email for delivery when the server is up and running again.
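For what it's worth, the 7 hour estimate is simple arithmetic; the sketch below assumes a sustained copy rate of about 40 MiB/s, which is an illustrative figure rather than a measurement.
size_mib = 1024 * 1024      # a 1 TiB disk image in MiB
rate_mib_s = 40             # assumed sustained network copy rate
hours = size_mib / rate_mib_s / 3600
print(f"about {hours:.1f} hours")  # roughly 7.3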
We are upgrading Docker on all our servers, this will result in a little downtime for services that run in Docker containers, specifically Mailcow on webarch.email
and all Discourse servers.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Docker on a few servers that were not updated this morning.
The issue with the EHLO/HELO
on webarch.email
has been resolved and we believe that delayed emails will now start to get through.
There is an issue with webarch.email
related to this morning's Docker upgrade, it is currently not using the correct EHLO/HELO
and this is resulting in some outgoing email being delayed, we are working with the Mailcow developer to resolve this problem ASAP.
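For anyone who wants to see what a mail server announces from the outside, the SMTP greeting carries the hostname and can be read with the standard library; a minimal sketch, with webarch.email simply as the obvious host to point it at:
import smtplib
# Connect to port 25 and print the 220 greeting, which includes the hostname
# the server announces; receiving servers compare this sort of thing to DNS.
s = smtplib.SMTP(timeout=10)
code, banner = s.connect("webarch.email", 25)
print(code, banner.decode())
s.quit()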
The issue with the certificate for mail.webarch.email
has been resolved, sorry for any problems this caused.
We are sorry about the certificate issues with mail.webarch.email
, we are sorting that out now.
We are upgrading Docker across all our servers and this will cause a little downtime for services that depend on it like webarch.email
and Discourse servers.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Docker, Discourse and Mailcow over the next hour so services that run using these projects will have a little downtime.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are upgrading GitLab on git.coop.
We are upgrading Mailcow on webarch.email so it will be unavailable for a little while.
We are upgrading our managed Discourse servers, so they will all be offline for a little while.
The planned upgrade to the latest version of Mailcow completed successfully. Kate
webarch.email is being upgraded on advice from our support
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
It was the GitLab registry partition that was full on git.coop
, it is now being doubled in size from 20GiB to 40GiB — this reflects how much Docker usage is happening!
We are shutting down our GitLab server at git.coop
to add additional disk space to it, it should be up again soon.
We are restarting the GitLab service on git.coop
to see if that resolves the issues the service has been having today.
All the virtual servers are up and running again, very sorry for the downtime.
We have powered up the server that went down and are now starting to bring up all the virtual servers it hosts.
One of our front facing virtual server hosts has gone down and we are arranging to visit the data centre now to investigate the problem.
We are restarting webarch2.co.uk
so sites hosted on this server will be unavailable for a little while.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading all our hosted Discourse servers so they will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Mailcow on webarch.email
so it will be unavailable for a little while.
We are upgrading Mailcow on webarch.email
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
and all our Discourse servers so these services will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
and also all the Discourse servers we host so there will be a little downtime for these services.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Mailcow on webarch.email
to resolve an issue with SOGo autoreply messages, as a result the server will be unavailable for a little while.
The upgrade of webarch.email
has completed, sorry for the downtime.
We are sorry about the on-going outage of webarch.email
, we hope to have it up and running again soon.
We are upgrading Docker on all our servers, this will result in services that are run in Docker containers restarting and they will be unavailable for a little while, this includes Mailcow and Discourse.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
and also Mailcow on webarch.email
so both services will be unavailable for a little while.
We are updating GitLab at git.coop
, so it will be unavailable for a little while.
We are currently upgrading all the Discourse servers we host, they will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be offline for a little while.
We are upgrading GitLab on git.coop
so it will be offline for a little while.
We are upgrading GitLab on git.coop
so it will be offline for a little while.
We have completed the move to the new file server for webarch7.co.uk
.
We have completed the move for webarch6.co.uk
and it is up and running again. We are now shutting down webarch7.co.uk
to move it to the new file server.
We are moving webarch6.co.uk
to our new file server so it will be offline for a while from 11pm BST.
We are shutting host3.webarch.net
down for a short time to extend a disk.
The upgrade and reconfiguration of webarch4.co.uk
has completed.
The upgrade and reconfiguration of webarch5.co.uk
has completed.
The upgrade and reconfiguration of webarch2.co.uk
has completed.
All four servers we are upgrading this weekend are now running the latest stable version of Debian, Buster, we are now in the process of reconfiguring all the user accounts on the servers and regenerating the Apache and PHP config for the switch from mod_php to php-fpm.
We are starting a planned upgrade on webarch5.co.uk
so it will be unavailable for a while.
We are starting a planned upgrade on webarch4.co.uk
so it will be unavailable for a while.
We are starting a planned upgrade on webarch3.co.uk
so it will be unavailable for a while.
We are starting a planned upgrade on webarch2.co.uk
so it will be unavailable for a while.
We are currently having an issue restarting MySQL on webarch4.co.uk
, we hope to have this resolved soon.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We have completed the upgrade work on host3.webarch.net
and webarch1.co.uk
.
We are currently configuring user accounts on host3.webarch.net
and upgrading webarch1.co.uk
.
The upgrade work on host2.webarch.net
has been completed.
We are upgrading GitLab on git.coop.
We are shutting down webarch1.co.uk
in order to copy it to our new file server, once it has been copied we will be upgrading it, so it will probably be unavailable for some time.
We are shutting down host3.webarch.net
in order to copy it to our new file server, once it has been copied we will be upgrading it, so it will probably be unavailable for some time.
We are shutting down host2.webarch.net
in order to copy it to our new file server, once it has been copied we will be upgrading it, so it will probably be unavailable for some time.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading Mailcow on webarch.email
so email services will be unavailable for a little while.
We are upgrading GitLab on git.coop
and as a result it will be unavailable for a little while.
The issues with our old disk array this morning have been resolved. We have a new SSD based disk array in place and shall be migrating service to it out of office hours over the next few weeks.
We are upgrading Mailcow on webarch.email
as it won't take much longer than a restart and there have been a few users who have had issues that appear to be related to the issues with the disk array earlier. It will be offline for a little while.
There is currently degraded performance on our old disk array and this is affecting most of our Sheffield based systems, we shall be migrating services to our new disk array as quickly as possible but we are afraid that this situation is probably going to take some time to fully resolve.
The members' Discourse forum and Nextcloud are available again
The members' Discourse forum and office Nextcloud are offline for maintenance
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
Scheduled maintenance this evening (20:00-24:00) is cancelled
The migration of git.coop
back to the old disk array has completed.
We are having to migrate the GitLab server at git.coop
back to our old storage array as we have found that the new array needs rebuilding before it can be used in production and we are sorry to say that we anticipate that this is going to result in the git.coop
server being offline for a little over two hours.
git.coop will be offline between 20:00 - 24:00 06/07/2020 for maintenance
The migration and upgrade of git.coop
has completed, sorry it took so long.
The git.coop
GitLab server will be unavailable for a little while as we are upgrading it and migrating it to the new storage array.
We are upgrading Mailcow on webarch.email
so it will be unavailable for a little while.
We are upgrading Mailcow on webarch.email
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are in the process of upgrading Docker on all our servers and this will result in services that depend on Docker being unavailable for a little while, sorry for any problems caused by this.
We will be stopping and restarting webarch.email
to allow Docker to be upgraded, this will result in a little downtime.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
Sorry for the disruption to the webarch.email
service this morning, it has now been updated and allocated additional resources.
We are having some issues with webarch.email
and as a result we are going to upgrade and restart it, it will be unavailable for a little while.
We are going to be shutting down webarch1.co.uk
, webarch2.co.uk
, webarch4.co.uk
and webarch5.co.uk
for short periods this evening to extend the size of their disks.
We are upgrading all our hosted Discourse servers, they will be unavailable for a little while.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are upgrading Mailcow on webarch.email
and extending a disk on webarch3.co.uk
so these services will be unavailable for a little while.
We are upgrading GitLab on git.coop
.
We are upgrading GitLab on git.coop
.
We are upgrading Docker on our servers, this will result in services such as webarch.email
being unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be down for a little while.
We are upgrading Docker on all our servers, this is going to result in a little downtime for everything that depends on it including webarch.email
.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are manually rebuilding the Discourse servers we host to upgrade PostgreSQL, so there will be a little downtime this evening.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
A new version of Docker has been released so we are going to upgrade it across all our servers, this will cause a little downtime for services that run on Docker, including webarch.email
.
We are upgrading GitLab on git.coop
to version 12.10.6.
This morning's upgrades have completed, in a little while webarch.email
should have fully restarted, all other services should already be up and running without issues.
We are about to update the Docker containerd.io
package on all our servers, this will cause all the services running via Docker to be restarted so there will be some downtime for some things like webarch.email
for a little while.
We are about to upgrade git.coop
so it will be down for a little while.
We are upgrading GitLab on git.coop
.
We are upgrading GitLab on git.coop
.
We are sorry that the Apache web server stopped running on webarch3.co.uk
just before 1:20am BST this morning, we haven't found the cause of this but the service was restored at 9:10am.
We are rebooting webarch6.co.uk
to add additional CPUs to the server, it will be unavailable for a little while.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are upgrading webarch.email
, it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be offline for a little while.
We are going to be upgrading webarch.email
shortly and this will result in it being unavailable for a little while.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are going to upgrade webarch.email
so it will be unavailable for a little while from around 11:00 UTC.
The webarch6.co.uk
server is up now, we are very sorry for the downtime of services today.
We believe that all our servers are now back up apart from webarch6.co.uk
, we are still working on this one. Please contact us at support@webarchitects.coop if you are having an issue with a service not on webarch6.co.uk
.
We are having issues with MariaDB InnoDB recovery on several shared hosting servers.
We are now leaving the data centre, we will bring up the remaining servers remotely.
We have started bringing up the virtual servers, it will be a little while before they are all running again.
There appears to be no hardware problem with the file server, we are now decrypting the disks and bringing it back up, once this has been done we will be able to mount the filesystem from the front facing servers and then we will be able to start bringing all the virtual servers back up, this is probably going to take a couple of hours in total, very sorry for the inconvenience caused.
Our main file server is still booting.
Our main file server was not responding on the console so we are power cycling it, it is booting now. We will need to restart all the virtual servers after the file server is brought back up.
We are outside our Sheffield data centre waiting for a member of staff to let us in.
Our main file server had gone down and we are on our way to the data centre to investigate the cause.
We are having some issues with our infrastructure in Sheffield and are investigating the cause.
We are updating GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be unavailable for a little while.
We are upgrading GitLab on git.coop
so it will be down for a little while.
We are aware of rate limiting of email from webarch.email
to Gmail and Outlook, we believe that this has been triggered by a client whose email account was compromised and then used to send a large volume of spam. Please ensure that all the devices that you use to access your email accounts are secure and that all your email passwords are strong ones. We expect the rate limiting to be lifted in time.
We are shutting down webarch.email
to add additional disk space, it will be unavailable for a little while.
We are upgrading GitLab so git.coop
will be unavailable for a little while.
We are upgrading Docker on all our servers, this will cause a little downtime for servers that use Docker like webarch.email
.
We are upgrading GitLab on git.coop
, it will be unavailable for a little while.
We are upgrading Docker on all our servers, this will result in a little downtime on webarch.email
.
We are updating Docker on all our servers, this will result in a little downtime on servers that use Docker, for example webarch.email
.
We are upgrading webarch.email
so it will be unavailable for a little while.
We are rebooting host3.webarch.net
and it is currently doing a file system check, it should be back online soon, sorry for any inconvenience caused.
Our upstream bandwidth provider has put mitigations in place to prevent the NTP reflection attack from disrupting our services, we hope that this is the end of the matter, however because we still have no idea what the reason for the attack was we have no idea if there is more to come, once again we are very sorry for any inconvenience caused.
It looks like it is an NTP reflection attack against webarch4.co.uk
, we have no idea of the reason for it, our upstream bandwidth provider is working on mitigating the effect.
Our Sheffield network is suffering from a DDOS, we are working with our upstream partners to investigate the cause of this and to see what mitigations can be put in place. We are very sorry for any inconvenience caused.
We need to quickly reboot webarch6.co.uk
, sorry that it will be unavailable for a few minutes.
We believe that all systems are up and running now. There was a power outage for the whole area that our data centre is in and then the UPS for our rack (which is due to have replacement batteries by the end of the month) didn't manage to provide power for quite long enough to allow the backup generator to start up and therefore all our servers lost power. Once power was restored we needed to do a filesystem check on the router before decrypting and bringing up the storage array and once that was up we were able to bring up all the virtual servers, sorry that this all took a while, this is the first time in well over a decade that we have had an issue like this.
We hope to have all services restored by 2pm.
We are in the process of bringing services back up.
There has been a power failure at our Sheffield datacentre and this has caused all our Sheffield services to go down, we are on our way to the datacentre now.
The update to webarch.email
has completed, sorry for any inconvenience caused.
We are updating webarch.email
, it will be unavailable for a little while.
We are upgrading webarch.email
so it will be unavailable for a little while.
There are some issues with the git.coop
GitLab web interface today, we are unsure of the cause, however we are going to upgrade the server as this has been needed for a while, this will result in a little downtime during the day.
webarch1.co.uk
is up and running again as normal, sorry for any inconvenience caused.
We are shutting down webarch1.co.uk
for a little while as we add additional disk space to it.
We are restarting the webarch.email
server, it will be unavailable for a few minutes.
We are doing a quick restart of the mail.webarch.email
services to enable a configuration change to be applied.
The upgrade of mail.webarch.email
has been completed, sorry for any inconvenience caused during the downtime.
We are upgrading mail.webarch.email
, it will be unavailable for a little while.
We intend to upgrade mail.webarch.email
from 8pm this evening and expect that the server will be unavailable for a little while while this work is undertaken, no email should be lost in this process but some might be delayed a little.
We have just generated a new TLS certificate for mail1.ecohost.coop
so the problems when connecting to the mail server using the old Ecohost settings should now be resolved. However it would be better if people did update their email clients to use the webarch.email settings
We are very sorry for the inconvenience caused by the lapsing of the ecohost.coop
domain name, this has now been renewed.
Sorry for the interruptions in service between 10pm and 10:50pm BST, the new router is now in place and we hope that services will be better than ever as a result.
We are installing a new router at our data centre in Sheffield, all Sheffield based services will be unavailable for a little while.
We are planning on replacing the temporary router we have in our Sheffield data centre with a new one from 10pm BST tonight, this will result in a little downtime but should solve the limited incoming bandwidth issues we currently have.
We believe that the issues with webarch.email
this morning have now been fully resolved and that no email was lost (some will have been delayed), once again sorry for the downtime.
We are still having some problems with webarch.email
, we hope to have them resolved soon.
We have resolved the issue with the Mailcow upgrade and webarch.email is up and running again, sorry for the downtime.
We are currently upgrading Docker on webarch.email
and are afraid that this is taking longer than usual.
We are very sorry about the connectivity issues we have had today with all our services hosted in Sheffield, this was due to an inadequate temporary replacement router, we now have a second replacement router in place and are currently doing some testing on it, we hope it will be sufficient as a temporary solution while we source and provision a pair of new replacements.
We have just done a very quick reboot of the replacement router, it was only down for a moment, we do not anticipate any further downtime, once again we are very sorry about the problems this morning.
We are doing a quick reboot of the replacement router, it should be back up in a moment.
Our Sheffield services are restored, we are very sorry for the downtime and the inconvenience caused.
We are in the process of configuring a replacement router.
We hope to have all services restored by 10am BST.
All our Sheffield services are unavailable following a reboot of our main router, which failed to come back up, we are on our way to the data centre to resolve the problem.
We are doing some work on our router at 16:00 GMT today and we expect connectivity to all our Sheffield servers to be lost at this time for around one minute.
Sorry for the git.coop
downtime this morning, we are upgrading it and giving it additional resources (more RAM and CPUs).
We are updating GitLab at git.coop
and it will be unavailable for a little while.
The upgrade for webarch.email
has completed.
We are upgrading webarch.email
and as a result it will be unavailable for a little while.
We are upgrading Docker across multiple servers and this will result in some services being unavailable for a while.
The problem with the certificate for webarch.email
has been fixed.
The upgrade for webarch.email
is complete.
We are upgrading webarch.email
so it will be unavailable for a little while.
We have booted the server and are now bringing up the virtual servers one at a time.
We are at the data centre and checking the BIOS on the server.
We are going to the data centre to investigate the problem with the server that is down.
One of our front facing servers is currently off-line and this is affecting several of our services, we are looking into the cause of this problem.
We are upgrading Docker across all our servers and this will result in some downtime for a variety of applications, the key one being webarch.email
.
We are upgrading Docker across all our servers and this will result in some downtime for a variety of applications, the key one being webarch.email
.
We are upgrading GitLab at git.coop
so it will be off-line for a little while.
We are rebooting services due to last night's Linux kernel update.
We are upgrading our GitLab server at git.coop
so it will be unavailable for a few minutes.
We are about to apply the May updates to webarch.email
, it will therefore be unavailable for a few minutes.
We are updating the GitLab server at git.coop
and it will therefore be unavailable for a few minutes.
The update to the webarch.email
server has been completed.
We are shutting down the webarch.email
server to apply some updates, it shouldn't take long.
The webarch.email
server is up and running again, sorry for any inconvenience caused.
We are restarting the webarch.email
server to disable the Solr search index (without Solr searching emails using SOGo still works but is slower), the server will be unavailable for a little while.
The full re-scan of the email on webarch.email
has completed and the search facility in the SOGo interface is now working correctly.
We have started a full rescan of email on webarch.email
to try to solve the issues there have been with the SOGo search not returning full sets of results, this will result in the server running with a high load for some time and this could have an impact on users.
The work on webarch.email
has completed.
We are adding additional resources to the webarch.email
server and it will be down for a little while.
The problem with webarch.email
has been resolved.
We are having problems with the web interface of webarch.email
following an update, we hope to have this resolved soon, non-web based email services are working normally.
We apologise for the issues with our servers in Sheffield today, there have been intermittent slow disk speeds and high iowait
times, we haven't yet found the cause of this.
The upgrade to webarch.email
has completed, sorry for the downtime.
We are applying the January updates to webarch.email
and it will be unavailable for a little while.
webarch.email
is up and running again.
We are restarting webarch.email
as part of the planned migration of email accounts from mail1.ecohost.coop
, it should be up again soon.
We are very sorry to say that a client's email account on mail1.ecohost.coop
has been compromised and used to send a massive volume of spam and this has caused the server to be blacklisted, we have secured the account and are now seeking to have the server de-blacklisted.
The disk resizing has completed, webarch2.co.uk
is up and running again.
We are shutting down webarch2.co.uk
to add additional disk space, this shouldn't take long.
The upgrade to webarch.email
has completed, sorry for any inconvenience caused.
We are updating webarch.email
and it will be unavailable for a little while.
The update to webarch.email
has completed, sorry for the downtime.
We are updating webarch.email
to get the latest Mailcow updates, the server will be unavailable for a little while.
The upgrade of webarch.email
has been completed.
We are upgrading webarch.email
and it will therefore be unavailable for a little while.
Our GitLab server at git.coop
is up and running again, for more detail about the disk space issue see this issue. Sorry for any inconvenience caused.
Sorry about the downtime for git.coop
, we are currently growing the disk space.
We are upgrading webarch.email
and it will be unavailable for a little while.
The upgrade of webarch.email
has completed, sorry for any inconvenience caused.
We are upgrading webarch.email
and as a result it will be unavailable for a little while.
We have just restarted the webarch.email
server to address some issues that it was having.
We have requested the removal of webarch.email
from as many anti-spam blacklists as we can, however it might be up to a week before the server is fully delisted, we are very sorry for any inconvenience caused by this.
We are sorry about the problems with webarch.email
this morning, it was due to an account being compromised and huge volumes of spam being sent.
We have upgraded webarch.email
and it is available again.
We are upgrading webarch.email
and it will be unavailable for a little while.
The update to webarch.email
has been completed and we have also updated the IP address of this status page (see this article for the background) so it should soon be available again. We are sorry for any inconvenience caused.
We are about to upgrade webarch.email
and it will be unavailable for about ten minutes.
We have completed the upgrade of webarch.email
and are sorry for any inconvenience caused.
We are going to upgrade webarch.email
and it will be unavailable for a few minutes.
We have completed the maintenance on webarch.email
, sorry for any inconvenience caused.
We are updating webarch.email
, it will be unavailable for a few minutes.
We believe that webarch.email
is fully up and running again, sorry for the downtime.
We are updating and restarting the webarch.email
server and it will be unavailable for a short while as a result.
We are sorry that we are having some issues with our mail server at webarch.email
, we hope to have this resolved as soon as possible.
We have added additional RAM to webarch.email
and updated the server and it is now up and running again, sorry for any inconvenience caused.
We have had some issues with webarch.email
server this afternoon and we are going to add some additional RAM to the server and reboot it, it will be unavailable for a little while.
The webarch.email
server is up and running again.
We are upgrading and restarting the webarch.email
server, it will be unavailable for a little while.
We are about to upgrade webarch.email
, it will shortly be unavailable for about 5 minutes.
We are rebuilding all our DNS servers, the work on the Sheffield ones, dns0.webarchitects.co.uk
and dns1.webarchitects.co.uk
has been completed. The European server, dns2.webarch.info
has been rebuilt and it is now based in Paris and has a new IP address, 217.70.190.127
. Our Icelandic server, dns3.webarch.info
, is in the process of being rebuilt and when this is completed it will also have a new IP address, 185.112.146.79
.
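A quick, illustrative way to confirm the new addresses have propagated from wherever you are, using only the standard library:
import socket
# Print the current A record for each rebuilt DNS server as resolved locally.
for host in ("dns2.webarch.info", "dns3.webarch.info"):
    print(host, socket.gethostbyname(host))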
The work on webarch.email
has been completed.
We are doing some maintenance on the webarch.email
server, it will be unavailable for a few minutes.
We are restarting webarch.email, it should be back up in a moment.
The webarch.email
server has been upgraded and is available again.
We are upgrading the webarch.email
server and expect it to be unavailable for a few minutes.
We are seeing very high loads on host3.webarch.net
and we have shutdown Apache and MySQL on the server while looking into this issue. host2.webarch.net
is running a filesystem check and should be up soon.
We are restarting all the virtual servers on xen2.webarch.net
.
We have resolved the issues with the connection to the IPMI on xen2.webarch.net
and are once again shutting down all the virtual servers on the server.
We are having issues connecting to the IPMI on xen2.webarch.net
, so it hasn't yet been rebooted; we are restarting the virtual servers on it and will shut them down again when we are ready to reboot the server, sorry for this delay.
We are shutting down all the virtual servers on xen2.webarch.net
.
We are restarting all the virtual servers on xen1.webarch.net
.
We are rebooting xen1.webarch.net
.
We are shutting down all the virtual servers on xen1.webarch.net
.
Due to the critical Meltdown and Spectre vulnerabilities we are planning on rebooting all our Sheffield servers during the course of the morning of Wednesday 10th January 2018; we don't anticipate a great amount of downtime but all services will be affected for a while.
We are expecting to have to reboot all our servers at some point in the next few days, when Debian updates are available for these security issues.
The webarch.email server has been upgraded and appears to be working without issues.
We are going to upgrade webarch.email this afternoon, an email announcing this has been sent to all mailboxes on the server.
Restoration of all the virtual servers in Iceland is now looking hopeful, the latest from @1984ehf: "24 hours ago we thought that nearly all VPS data was corrupt beyond repair. We have since found a way to restore disks from the VPSs. This means that we will be able to retrieve data from a lot of the VPS disks and we will help every one of our VPS customers to find their data."
We are pleased to say that mail.webarch.net
is back up again and we believe we have solved the cause of the downtime.
Sorry about the downtime for mail.webarch.net
, it should be up again soon.
We are pleased to say that mail.webarch.net
is up and running again, sorry for the downtime.
We expect the filesystem check being run on mail.webarch.net
to take several hours to complete and the server won't be up before this completes. Sorry for the downtime.
Very sorry about the ongoing downtime for mail.webarch.net
, we hope to have it back up ASAP.
We have had an issue with mail.webarch.net
and we are currently restarting the server.
We are pleased to say that webarch.email
is up now and working normally.
We are having to reboot webarch.email
, sorry for the downtime; it should be back up shortly.
We believe all our systems are up and running again normally, very sorry for the downtime.
We are afraid that we have a problem with our front facing Xen servers in Sheffield and we are having to reboot them, this will result in some downtime for all our Sheffield based services.
The Sheffield connectivity issue this morning was caused by a problem with a router in London, we have spoken to our data centre and they have resolved the issue.
We are looking into connectivity issues at our Sheffield data centre.
Our Mailman server is now up and running again.
We should have the mailman server up and running very soon. We are afraid it is going to be a little longer for mail.webarch.net
and the MKDoc server.
We are copying the mail.webarch.net
disk; we have copied 90G out of 275G in one hour, and at this rate we are afraid it is probably going to be down until around 3:30pm.
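For those wondering where the 3:30pm estimate comes from, it is simple rate arithmetic based on the first hour of the copy (the figures below are the ones quoted above):

    # Rough ETA arithmetic: 90G copied in the first hour leaves 185G
    # of the 275G disk, i.e. roughly two more hours of copying.
    total_gb, done_gb, hours_elapsed = 275, 90, 1

    rate = done_gb / hours_elapsed                    # ~90 GB/hour
    remaining_hours = (total_gb - done_gb) / rate
    print(f"~{remaining_hours:.1f} hours remaining")  # ~2.1 hours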
We are doing some additional work on mail.webarch.net
, it will be down for a little while.
mail.webarch.net
is back up and we have also fixed the issue with the webserver on webarch1.co.uk
. We hope to bring up the Mailman server very soon.
We are afraid it is going to be some time before mail.webarch.net
is up and running again, we are copying the disk across the network and have copied 41G out of 275G — it is going to take some time to complete.
mail1.ecohost.coop
is now up again.
The file system check on mail1.ecohost.coop
is 53% complete and might take another hour or two to complete, then that server will come up again. The server that was running several CentOS servers, cat.webarchitects.co.uk
, which had an uptime of 2017 days, had two RAM chips fail after the move; after removing them the server still would not boot, so we moved the disks into another server and used a rescue disk to repair the RAID 0 boot partition on all the disks. The server that the disks from cat
are now running in doesn't have enough RAM to run the virtual servers, so we are now copying the partitions over the network to our FreeBSD ZFS file server and when that is completed we will start them up on xen1.webarch.net
. This means that it is going to be quite a few hours before these servers are up:
mail.webarch.net
(the Webarchitects mail server) and email-lists.org
along with several other domains. Sorry for the downtime.
The Ecohost mail1.ecohost.coop
server is running a file system check and should be up soon.
We believe all our Sheffield services are up and running again apart from mail.webarch.net
, our Mailman server and our MKDoc server, we have had some hardware issues with the physical server these virtual servers run on and are trying to solve the problems.
We are starting to shut down services in our Sheffield data centre for the room move.
We will start shutting down servers in our Sheffield data centre at 10pm BST to move them to a new room; we will post updates as the work progresses. We are at the data centre now.
We believe that all services are now running normally, please contact us if you have any issues.
We have some issues with xen1.webarch.net
which runs multiple virtual servers and as a result we are in the process of migrating services to xen3.webarch.net
, very sorry for any inconvenience caused.
The mail1.ecohost.coop
server has been removed from the SORBS blacklist and should soon be removed from the Barracuda blacklist.
We have requested that mail1.ecohost.coop
is delisted from the SORBS and Barracuda email server blacklists, sorry for the inconvenience these listings might be causing.
We are aware of issues regarding mail.webarch.net
and we are investigating the cause of these problems.
One account on mail.ecohost.coop was being used to send spam, we have stopped the abuse, but there is a possibility that this will have a negative impact on other users of the server if the server is blacklisted.
Our new mailcow server, webarch.email, is now up and running again, very sorry for the downtime.
We are working on restoring webarch.email from a 2am backup from 5th July 2017.
All services, apart from webarch.email are now up and running, we are still working on bringing webarch.email back online. Sorry for the inconvenience.
We are in the process of restarting all the servers on xen1.webarch.net, very sorry for the downtime.
We are having to reboot xen1.webarch.net due to kernel issues across all the virtual servers on this machine, we are afraid this is resulting in some downtime for a large number of services.
We have had some problems with webarch.email and are having to reboot the server, it should be up again in a moment.
The planned maintenance on mail.webarch.net
and webarch.email
has now been completed and all our mail services are up and running again, sorry for any inconvenience caused.
We are about to conduct some planned maintenance on mail.webarch.net
and webarch.email
, this will involve an interruption to your email service, but no email should be lost; we will post an update when the work has been completed.
Some of our hosted sites which use Let's Encrypt HTTPS certificates are currently unavailable due to Let's Encrypt OCSP servers being down, see the Let's Encrypt Status Page.
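For background, clients check certificate revocation against the OCSP responder named inside each certificate, so an outage of Let's Encrypt's responders can make affected HTTPS sites fail to load. The following minimal Python sketch (it needs the third-party cryptography package, and the hostname is just an example) shows how to read the OCSP responder URL out of a site's certificate:

    import ssl
    from cryptography import x509
    from cryptography.x509.oid import (
        ExtensionOID,
        AuthorityInformationAccessOID,
    )

    # Fetch the server certificate and parse it (example hostname).
    pem = ssl.get_server_certificate(("www.webarch.coop", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # The Authority Information Access extension names the OCSP responder.
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS
    ).value
    for desc in aia:
        if desc.access_method == AuthorityInformationAccessOID.OCSP:
            print("OCSP responder:", desc.access_location.value)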
Very sorry for the downtime this morning, all systems are now up and running again.
We are now bringing the virtual servers up one at a time and each file system is being checked, it will be a little while before they are all running again, very sorry for this downtime.
We are about to reboot one of our main physical servers in Sheffield which hosts multiple virtual servers, so quite a few servers will be unavailable for a while.
We are currently having some issues with webarch1.co.uk
, we hope it will be available again shortly.
Sorry about the problems with web2.ecohost.coop
, the server hosts a Labour MP's site which is under a very high load today.
Very sorry for host3.webarch.net being down for just over an hour this morning, it is now up again.
host1.webarch.net and host3.webarch.net are now up and running again, very sorry for the downtime
We are rebooting host1.webarch.net and host3.webarch.net for essential maintenance.
host1.webarch.net and host3.webarch.net went offline between 1:20am and 1:30am this morning and were not available until 8:30am; we sincerely apologise for any inconvenience and we are investigating the cause of the issue.
We apologise for the downtime that host1.webarch.net and host3.webarch.net have just had, the servers are now rebooting and should be up again in a moment.
We are sorry that we just had a little downtime across several shared hosting servers in Sheffield due to a large load spike, we are looking into the cause.
host3.webarch.net is back up, sorry for any inconvenience caused.
We are expanding the disk on host3.webarch.net, it should be up again soon. Sorry for the downtime.
We have resolved an issue with payments of invoices via PayPal, invoice payments via PayPal are now working correctly, sorry for any inconvenience caused.
We are very sorry to say that we have discovered that our ticketing system was not accepting tickets for 14 days over Christmas and the New Year due to a misconfiguration error. If you tried to contact us during this period you should have received an 'Undelivered Mail Returned to Sender' response; please resend your original email as the ticketing system is now up and running again. We apologise for any inconvenience caused by this outage.
We are seeing some connectivity issues with servers and services in Iceland this afternoon.
The fault with our office phone line has now been resolved and our answerphone is available again when nobody is available.
There is currently an issue with our office phone line; The Phone Co-op are investigating the fault, so it is best to use email to contact us today.
Sorry for the loss of network connectivity for host3.webarch.net
between 13:57 and 14:03, we are investigating the cause of this outage.
The mail1.ecohost.coop
server has now been removed from all but one blacklist and for this one "Automatic removal will occur for IPs that are seen to be clean". We have sent an email to users of the server to explain what happened and to apologise for any inconvenience caused.
The mail queue on mail1.ecohost.coop
has now cleared but we are sorry to say that the server is now listed on some blacklists due to a compromised account on the server being used to send a large volume of spam — we are now working to have it removed from the blacklists. Sorry for any inconvenience caused by this and please ensure that all the devices you use for email are secure!
Please bear with us as the mail queue on mail1.ecohost.coop
clears — until then there will be some delays in outgoing emails.
The mail1.ecohost.coop
server is back up with additional disk space, once again sorry for any inconvenience caused.
Very sorry for the inconvenience but there is going to be a little downtime for mail1.ecohost.coop
as we need to add additional disk space to the server.
The mailserver mail.webarch.net
has been rebooted with an extra 2G of RAM and the MKDoc server is also back up. Sorry for any inconvenience caused by the short downtime.
We are about to reboot the Webarchitects mailserver mail.webarch.net
in order to add additional RAM to it. There will also be a short downtime for the MKDoc server as we are removing some RAM from that machine, since it is running fewer sites these days.
Sorry to say that there has just been another short outage to services in Iceland, between 14:16:42 and 14:18:33; we will post an update here when we know the cause of the connectivity issues today.
There has just been another short connectivity issue with Iceland based services; the network was down between 12:14 and 12:15 BST and is back up now.
Network connectivity to servers and services running in Iceland has been restored.
We are experiencing network connectivity issues with our servers in Iceland, we are investigating the cause of this problem.
Barracuda Networks have responded to the delisting request for mail1.ecohost.coop
as follows:
"We have removed 81.95.52.108 (Please wait 24-48 hours) from our blocklist for 30 days, at which time it will be re-evaluated."
You can check on the delisting progress with a Debouncer Blacklist Check.
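For anyone who wants to check the listing status directly, DNS-based blacklists (DNSBLs) work by reversing the IP's octets and querying them against the blacklist zone; an A record in the answer means the address is listed, NXDOMAIN means it is not. A minimal Python sketch, using the publicly documented Barracuda lookup zone and the address quoted above (note that Barracuda asks DNSBL users to register before querying their zone):

    import socket

    def dnsbl_listed(ip: str, zone: str = "b.barracudacentral.org") -> bool:
        """Return True if `ip` is listed on the given DNSBL zone."""
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)  # any A record means "listed"
            return True
        except socket.gaierror:
            return False                 # NXDOMAIN: not listed

    print(dnsbl_listed("81.95.52.108"))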
We believe we have discovered and blocked the abuse of services running on mail1.ecohost.coop
and we are now requesting that this server is removed from blacklists. Once again we are sorry for any inconvenience caused by this.
One of our email servers, mail1.ecohost.coop
is currently listed as having a 'poor' reputation on the Barracuda Reputation System, this is causing some email delivery issues, we are investigating the cause and when we have tracked it down we will request to be delisted. Sorry for any inconvenience caused by this.
This site is being launched today, all services and servers are up and running without issues.
We can be found in #webarch on irc.libera.chat, the Libera.Chat Internet Relay Chat (IRC) network has a web interface at web.libera.chat.
Our announcement emails lists have public archives and you can browse these at lists.webarch.co.uk.
If there is a problem with the main Webarchitects site you should be able to view a recent snapshot at the Internet Archive.
This site and the DNS servers for webarch.info
are hosted totally independently from all other Webarchitects server infrastructure to ensure that any outages elsewhere don't affect this site. The DNS servers are provided by Gandi and the web hosting by GitLab.
The code for this site, and a script for updating it, can be found at GitLab. This site takes a few minutes to be regenerated after an update is committed to the status.html file; you can check if there are any outstanding updates by looking at the jobs page.