The web, it turns out, is a fragile place. Companies, governments, educational institutions, organizations and individuals put up and take down sites all the time. The problem is that the web has become a system of record, and when links no longer work because pages no longer exist, the record is incomplete. With the help of volunteers and the Internet Archive, Wikipedia has been able to repair 9 million broken links and help solve that problem for at least one knowledge base.
The Internet Archive captures a copy of as many websites as it can to build an archive of the web. If you know what you’re looking for, you can search its Wayback Machine archive of more than 338 billion web pages, dating back to the earliest days of the World Wide Web. The catch is that you have to know what you’re searching for, and that isn’t always easy.
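For programs, at least, there is a more direct route: the Wayback Machine exposes a public availability API that takes a URL and returns the closest archived snapshot, which is the kind of lookup an automated link-repair tool depends on. The Python sketch below shows that lookup; the endpoint is real, but the helper name and example URL are illustrative, not taken from any actual bot’s code.

```python
import json
import urllib.parse
import urllib.request


def closest_snapshot(url: str, timestamp: str = "") -> str | None:
    """Return the Wayback Machine URL of the archived snapshot closest to
    `timestamp` (YYYYMMDDhhmmss), or None if the page was never captured."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    api = "https://archive.org/wayback/available?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(api) as response:
        data = json.load(response)
    # The API nests the result under archived_snapshots -> closest.
    closest = data.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    return None


# Example: find a copy of a page as it looked around early 2008.
print(closest_snapshot("example.com", "20080101"))
```

A human can run this sort of query one link at a time; fixing millions of dead references is only practical when software does it.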
A Wikipedia contributor named Maximilian Doerr brought the power of software to bear on the problem. He built a program called IABot, short for Internet Archive bot. Internet Archive also credits Stephen Balbach, who worked with Doerr and the Internet Archive on the effort.