Scientists in the US are set to unveil a new file system that's mathematically guaranteed not to lose your data when the machine crashes or fails, and they'll present it at a technology symposium later this year. This means that while bugs and power failures could still shut everything down without a moment's notice, whatever has already been written to disk should come back intact.

"What many people worry about is building these file systems to be reliable, both when they're operating normally but also in the case of crashes, power failure, software bugs, hardware errors, what have you," one of the team, Massachusetts Institute of Technology (MIT) researcher Nickolai Zeldovich, said in a press release. They hope to address this by integrating their new file system - which is the part of your computer that writes new data to a disk and tracks where it's stored - with PC operating systems of the future.

"Making sure that the file system can recover from a crash at any point is tricky because there are so many different places that you could crash. You literally have to consider every instruction or every disk operation and think, 'Well, what if I crash now? What now? What now?'" says Zeldovich. "And so empirically, people have found lots of bugs in file systems that have to do with crash recovery, and they keep finding them, even in very well tested file systems, because it's just so hard to do."

Zeldovich and his team address this concern with their new file system by proving that every component behaves in a prescribed, reliable manner at all times, even in the event of a crash, which makes it logically impossible for data to be silently 'forgotten'.

It sounds simple, but achieving it took an enormous amount of painstaking mathematics, using a process known as formal verification. This process, which Michael Rundle at Wired UK describes as "describing the limits of operation for a computer program, and then proving the program can't break those boundaries", allowed the team to take every single component in a file system and pin it down with a mathematical specification. These specifications then prescribe how the components relate to one another in running the file system.
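To get a rough feel for what that looks like, here's a tiny sketch written in Lean, a proof assistant in the same family as the tool the MIT team used. To be clear, this toy model is ours for illustration, not the researchers' actual code: we describe a 'disk' as a map from block addresses to values, describe what a write does, and then prove a small rule that the program can never break, in this case that reading a block straight after writing it always gives back what you wrote.

```lean
-- A toy model for illustration only, not the MIT file system itself:
-- a "disk" is just a function from block addresses to stored values.
def Disk := Nat → Nat

-- Writing block `addr` produces a new disk that returns `val` at that
-- address and is unchanged everywhere else.
def write (d : Disk) (addr val : Nat) : Disk :=
  fun a => if a = addr then val else d a

-- The "limits of operation" for `write`, stated as a theorem: reading the
-- block you just wrote always returns the value you wrote. The proof
-- assistant checks the proof mechanically, so the rule can't silently break.
theorem read_after_write (d : Disk) (addr val : Nat) :
    write d addr val addr = val := by
  simp [write]
```

Once the proof assistant accepts a proof like this, the property isn't a hope or a test case any more: no input and no sequence of operations can violate it.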

Once they had figured this all out on paper, they wrote it into their file system code, and used an existing 'proof assistant' tool called Coq to check that every proof actually holds. They spelled out rules for everything, right down to how the system defines "what is a disk?" and "what is a bit?", and Coq made sure all of it was logically consistent.
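And to see how the "what if I crash now?" questions from earlier get folded into those rules, here's one more toy sketch in the same style. Again, this is our own drastically simplified illustration rather than the team's code (a real crash-safe file system also has to reason about crashes in the middle of its own bookkeeping): we list every state the disk could be left in if the machine dies mid-update, and prove that whatever we wake up with is either the old data or the new data, never a half-written mixture.

```lean
-- Same toy model as before: a disk maps block addresses to values.
def Disk := Nat → Nat

def write (d : Disk) (addr val : Nat) : Disk :=
  fun a => if a = addr then val else d a

-- Every state the disk could be left in if we crash while updating one
-- block: either the write never reached the disk, or it fully landed.
-- (This is the smallest possible version of the idea.)
inductive CrashState (d : Disk) (addr val : Nat) : Disk → Prop
  | untouched : CrashState d addr val d
  | completed : CrashState d addr val (write d addr val)

-- The crash-safety rule: whichever instruction we crashed on, the disk we
-- recover is exactly the old one or exactly the new one, so nothing is
-- half-written or quietly "forgotten".
theorem crash_leaves_old_or_new (d : Disk) (addr val : Nat) (d' : Disk)
    (h : CrashState d addr val d') : d' = d ∨ d' = write d addr val := by
  cases h with
  | untouched => exact Or.inl rfl
  | completed => exact Or.inr rfl
```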

"All these paper proofs about other file systems may actually be correct, but there's no file system that we can be sure represents what the proof is about," Zeldovich said in a press release.

What they ended up with is a file system that cannot behave in any way other than how it's been prescribed, but because of this, it runs much slower than the file system in any computer you'd buy today. The team says it's entirely possible to improve that performance now that they have a working proof of concept. "No one had done it," says one of the team, Frans Kaashoek. "It's not like you could look up a paper that says, 'This is the way to do it.' But now you can read our paper and presumably do it a lot faster."

The researchers will be presenting their new file system at the ACM Symposium on Operating Systems Principles in October, and then presumably they'll take whatever advice they can get from their peers and go ahead and figure out how to make it commercially viable. Until then, here's a friendly reminder to stop whatever it is you're doing and click 'save', for god's sake. You're welcome.