This is the status page for tapestryjournal.com, the online learning journal. It will be updated whenever we become aware of an issue with our service.
You can report an issue to us at customer.service@eyfs.info
Tapestry is up and running as normal.
You can check whether tapestryjournal.com is reachable using Pingdom.
Apologies to those of you who experienced some errors this morning. They were due to some of our web servers running out of disk space. The issue has now been resolved.
In the first week of September, our phone provider experienced a DDoS attack that affected our ability to make and receive phone calls.
There was also a problem with securresuite.co.uk, which card issuers (e.g. banks in the UK) use to verify that your card is secure. This meant that card transactions processed through this company failed.
Scheduled maintenance: Tapestry was offline between 6 and 7 am GMT on Saturday 13 March for upgrades.
From January 5th to 11th we saw an increased rate of errors on Tapestry. These were the result of a chain reaction triggered by an exceptionally high load. We experienced some issues with background tasks in the week that followed; see below for details. You can also see our information page about it for FAQs.
We saw a few bursts of errors on Tuesday 19th at 8:35am, 8:43am and 11:20am. These quickly subsided and we believe we have found the cause.
From 5:19pm, iOS app users were emailed any notifications that should have been sent as push notifications. Affected iOS users should start receiving push notifications again the next time they open the app (which triggers it to re-register with us). Any iOS users who didn't receive a notification overnight should now be getting them as normal.
Some users were not receiving email notifications. We made sure any missed email notifications went out, but they may have been delayed.
Tapestry remained stable all day. However, media took slightly longer to process than usual, and some document uploads and scheduled observations stalled after background processes resumed the day before. Our developers identified and fixed the issue. All stalled tasks were restarted, with the scheduled observations backlog clearing by about 7pm and the antivirus checks for documents finishing early on Wednesday morning.
Tapestry began running slowly at 9:17am. Errors came in waves throughout the rest of the day, so users may have been shown an ‘error code 500’ page (a drawing of two dogs, a server, and one of our devs) or an ‘error 504’ page (a text page) when trying to carry out actions on Tapestry. To help reduce these we turned off background processes, meaning that users didn't receive notifications and media and PDF processing was temporarily halted. Any observations scheduled to be published were also delayed. We really do apologise for the inconvenience. We are doing everything we can to improve things for tomorrow.
Tapestry saw error rates of up to 3% between 10:00am and 10:06am. As of 10:10am Tapestry was functioning with 0% errors. Immediate notifications were turned back on and background work returned to normal.
Despite increasing our capacity substantially, our servers struggled to keep up with the exceptionally high traffic this morning. This led to errors for some of our users and a backlog in background processes, meaning a delay in video and PDF processing, slower notification emails, and a delay in scheduled observations being published. We had recovered by the afternoon and worked through the backlog.
Our load on Wednesday was much higher than we had ever seen before. On some metrics it was more than double our previous busiest day. This meant that some of our systems struggled, which at some points led to very high error rates. We are very sorry about that. We brought on extra capacity, which stabilised the system soon after lunch. We are doing what we can to prepare for tomorrow.
One of our database servers started experiencing problems. This then caused further problems in some of our other database servers, leading to error rates above 25% and occasionally above 60%. The relevant database servers were failed over to their backups and capacity was increased so the system could catch up. We are very sorry for the inconvenience this will have caused. We are looking into the root cause of the initial problem, and why it then spread.
Tapestry was offline between 6 and 7 am GMT on Saturday 19 December for upgrades.
Tapestry was offline for some scheduled maintenance.
Between 6am and 7am BST Tapestry was offline for scheduled maintenance.
Between 12:40 and 12:51 BST one of our database servers experienced very heavy load, leading to errors for some schools. We have increased capacity.
Tapestry was offline for scheduled maintenance between 6am and 7am BST on Saturday 20 June. Please accept our apologies for the inconvenience.
Between 6am and 6.30am we applied a series of scheduled updates to our database servers. These were found to degrade the performance of Tapestry significantly, and so we started to revert them. Unfortunately the servers became stuck in an error state, which meant their performance continued to be degraded until 8.30am, when backup servers were brought online. We are very sorry for the inconvenience this caused.
Tapestry experienced high load in the morning causing increased error rates. We stabilised Tapestry by upgrading our servers and are continuing to monitor the situation.
Tapestry experienced some downtime for all schools between 19:30 and 19:40 due to an issue with our database servers.
After a few months of peace, between 13:36 and 15:00 GMT we saw an increase in error rates, at the worst moments affecting one in three requests. Once again, it seemed to relate to the way that one of our database servers recovered from a moment of heavy load. We are sorry for the problems this caused. We have altered the way that the server recovers, which we hope will make faults like this much rarer.
We have an archive of issues from more than 12 months ago.