Channel: data storage

All the Digital Data In the World Is Equivalent to One Human Brain

[Image: An IBM Blue Gene/P supercomputer rack. (Wikimedia Commons)]

If you could put all the data in the world onto CDs and stack them up, the pile would stretch from the Earth to beyond the moon, according to a new study. The world's technological infrastructure has a staggering capacity to store and process information, reaching 295 exabytes in 2007, a reflection of the world's almost complete transition into the digital realm. Written out in bytes, that's a 21-digit number, in case you're wondering.
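To see why that stack clears the moon, here's a quick back-of-the-envelope check. The per-disc figures (roughly 700 MB of capacity and 1.2 mm of thickness per CD) are our own assumptions, not numbers from the study:

```python
# Back-of-the-envelope check of the CD-stack claim.
# Assumptions (ours, not the study's): 700 MB per CD, 1.2 mm per disc.
total_bytes = 295e18          # 295 exabytes of stored information in 2007
cd_bytes = 700e6              # ~700 MB per CD
cd_thickness_m = 1.2e-3       # ~1.2 mm per disc

num_cds = total_bytes / cd_bytes
stack_height_km = num_cds * cd_thickness_m / 1000

print(f"{num_cds:.2e} CDs, stack ~{stack_height_km:,.0f} km tall")
# -> roughly 4.2e+11 CDs, a stack on the order of 500,000 km,
#    comfortably past the moon's ~384,000 km average distance.
```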

Martin Hilbert and Priscila López took on the unenviable task of figuring out how much information is out there, and how its storage and processing have changed over time. Some of their findings seem obvious, like the fact that Internet and phone networks have grown at quite a clip (28 percent per year), while TV and radio grew much more slowly. But others are more surprising, like the nugget that 75 percent of the world's stored information was still in analog format in 2000, mostly in the form of video cassettes. By 2007, 94 percent of the world's info was digital.

In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, according to the study. Doing this by hand would take 2,200 times the period since the Big Bang.

In 1986, the first year the team examined, 41 percent of all computations were still done by calculator, the researchers found. By 2000, personal computers were doing 86 percent of the computing; by 2007, video game hardware accounted for 25 percent of the work. On the whole, gaming consoles have more computing power than the world's supercomputers, the study found.

Cell phones are catching up, too — they accounted for 6 percent of all computing in 2007. It's worth noting that's the year the first iPhone debuted, and a year before anyone could buy a mass-market Android phone, so it's a fair guess this number has increased exponentially since then.

Hilbert and López surveyed more than 1,000 sources and sifted through an incredibly thorough 60 categories of analog and digital technologies, from paper to vinyl records to Blu-ray discs. In all, they say the world was able to store 295 trillion optimally compressed megabytes; communicate almost 2 quadrillion megabytes; and carry out 6.4 trillion MIPS (million instructions per second) on general-purpose computers.

If you sympathize, and feel a bit overloaded as this work week ends, remember that in the grand scheme of information, this is but a speck. It's still smaller than the number of bits stored in all the DNA molecules of a single human adult, the authors say.
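That claim is at least plausible on the back of an envelope. Using common ballpark figures of our own choosing, not the authors': about 3 billion base pairs per genome, 2 bits per base, and a few tens of trillions of cells per adult:

```python
# Rough comparison: the world's digital storage vs. DNA in one adult's cells.
# All figures below are common estimates, not numbers from the study.
base_pairs_per_genome = 3.2e9     # approximate human genome length
bits_per_base_pair = 2            # 4 possible bases -> 2 bits each
cells_per_adult = 3.7e13          # oft-cited estimate of cells in a human body

dna_bits = base_pairs_per_genome * bits_per_base_pair * cells_per_adult
digital_bits = 295e18 * 8         # 295 exabytes expressed in bits

print(f"DNA: ~{dna_bits:.1e} bits, digital: ~{digital_bits:.1e} bits")
# -> ~2.4e+23 bits of DNA vs ~2.4e+21 digital bits: the genome copies in a
#    single body still out-store the world's entire 2007 digital archive.
```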

"To put our findings in perspective, the 6.4 x 1018 instructions per second that humankind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second," Hilbert and Lopez write.

Feeling smart now?


Steamy Cloud Servers Installed In Homes and Businesses Could Be Used As Furnaces

[Image: Microsoft Research scientists have proposed capturing waste heat from file servers and using it to heat buildings. (The Planet via Flickr)]

Finding a physical space to store our voluminous cloud-based data is a problem, sure, but keeping the servers cooled down is another, much bigger problem--and an environmentally unfriendly one at that. Instead of installing expensive cooling systems, future networked data centers could use the waste heat of computing to keep people warm.

A new paper from Microsoft Research proposes using servers as "data furnaces," installed in homes or businesses and connected to the air ducts. Housed in a rectangular metal cabinet and attached to the ductwork and hot-water pipes, a data furnace would look like any other furnace. Homeowners wouldn't even notice the difference — except, of course, for the huge power draw that a server requires.

Microsoft researchers Jie Liu, Michel Goraczko, Sean James and Christian Belady, working with Jiakang Lu and Kamin Whitehouse at the University of Virginia, explain that the exhaust from a typical computer server is not hot enough to use for electricity generation. But servers' exhaust typically runs between 104 and 122 degrees F, Gizmag points out, which is enough to heat up a home. In data centers, servers are usually cooled with fans and air-conditioning systems, which is why data centers are sometimes built in chilly regions or away from populated areas; that approach, though, is not very efficient.

Data furnaces would be much smaller than typical data centers, consisting of 40 to 400 Internet-connected CPUs, depending on the size of the home or business, and would enable homeowners and IT firms to conceivably save money and resources. There would be less need to construct huge new spaceship-like data centers, for instance, and micro-centers distributed throughout a residential area or office park would provide lower network latency, Gizmag notes.
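The heating math roughly works out, since nearly every watt a server draws ends up as heat. A rough sketch, assuming (our figure, not the paper's) about 300 watts of average draw per server:

```python
# Rough heat-output estimate for a "data furnace".
# Assumption (ours, not the paper's): ~300 W average draw per server,
# essentially all of which is dissipated as heat.
watts_per_server = 300
for servers in (40, 400):                      # range cited in the article
    heat_kw = servers * watts_per_server / 1000
    btu_per_hr = heat_kw * 3412                # 1 kW ~ 3,412 BTU/h
    print(f"{servers} servers: ~{heat_kw:.0f} kW (~{btu_per_hr:,.0f} BTU/h)")
# -> 40 servers: ~12 kW; 400 servers: ~120 kW.
#    A typical home furnace puts out roughly 60,000-100,000 BTU/h (about 20-30 kW),
#    so the small end suits a house and the large end an office building.
```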

Liu et al. point out that old servers can easily be recycled into homes, serving as backup for disk maintenance.

Security is one obvious question — how could IT companies ensure that a client's confidential data is safe in some random family's basement? What about floods, power outages, or server snafus?

Microsoft answers these questions by suggesting that host households agree to change the air filters occasionally and to shut off the servers if required, in exchange for free heat. What about free Windows updates? No word on that, sorry.

[via Gizmag]

IBM Is Building the Largest Data Storage Array Ever, 120 Petabytes Big

[Image: Hard drive platter closeup. Approximately 200,000 of these hard drives make up IBM's new array.]

Researchers at IBM's Almaden, California research lab are building what will be the world's largest data array--a monstrous repository of 200,000 individual hard drives all interlaced. Altogether, it will have a storage capacity of 120 petabytes, or 120 million gigabytes.
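The per-drive arithmetic is simple enough; assuming the capacity is spread evenly across the drives (our simplification), each one holds about 600 gigabytes:

```python
# Implied per-drive capacity, assuming the 120 PB is spread evenly.
total_pb = 120
drives = 200_000
gb_per_drive = total_pb * 1e6 / drives   # 1 PB = 1,000,000 GB in decimal units
print(f"~{gb_per_drive:.0f} GB per drive")   # -> ~600 GB per drive
```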

There are plenty of challenges inherent in building this kind of groundbreaking array, which, IBM says, is destined for what Technology Review describes as "an unnamed client that needs a new supercomputer for detailed simulations of real-world phenomena." For one thing, IBM had to rely on water-cooling units rather than traditional fans, because this many hard drives generate heat that can't be subdued in the normal manner. There's also a sophisticated recovery system that senses the number of hard-disk failures and adjusts the speed of rebuilding data accordingly--the more failures, the faster it rebuilds. According to IBM, that should allow the array to operate with minimal data loss, possibly none at all.

IBM's also using a new filesystem, designed in-house, that writes individual files to multiple disks so different parts of the file can be read and written to at the same time.
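The article doesn't detail that filesystem, but the idea it describes, splitting a file into chunks and spreading them across many disks so different parts can be read and written at once, looks roughly like this toy sketch (the chunk size, thread pool, and example data are ours, not IBM's design):

```python
# Toy sketch of striping a file across several disks so that different
# parts can be read or written concurrently. Chunk size and the use of
# threads here are illustrative choices, not details from IBM's filesystem.
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4                                     # bytes per stripe unit (toy value)

def stripe(data: bytes, n_disks: int) -> list[list[bytes]]:
    """Distribute fixed-size chunks of `data` round-robin across disks."""
    disks = [[] for _ in range(n_disks)]
    for i in range(0, len(data), CHUNK):
        disks[(i // CHUNK) % n_disks].append(data[i:i + CHUNK])
    return disks

def read_back(disks: list[list[bytes]]) -> bytes:
    """Read every disk 'in parallel' and reassemble the original chunk order."""
    with ThreadPoolExecutor(max_workers=len(disks)) as pool:
        per_disk = list(pool.map(lambda d: d, disks))   # stands in for real I/O
    out, i = [], 0
    while any(i < len(d) for d in per_disk):
        for d in per_disk:
            if i < len(d):
                out.append(d[i])
        i += 1
    return b"".join(out)

data = b"all work and no play makes a slow array"
assert read_back(stripe(data, n_disks=4)) == data
```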

This kind of array is bottlenecked pretty severely by the speed of the drives themselves, so IBM has to rely on software improvements like that new recovery and filesystem to up the speed and enable the use of so many different drives at once.

Arrays like this could be used for all kinds of high-intensity work, especially data-heavy duties like weather and seismic monitoring (or people monitoring)--though of course we're curious as to what this particular array will be used for.

[MIT Technology Review via Engadget]

Amazing Databases: The Combined DNA Index System

[Image: CODIS collage. (Wikimedia Commons)]

In 1990, when the FBI began building its master DNA database—the Combined DNA Index System, or CODIS—investigators could generally use DNA analysis only for cases in which they possessed both crime-scene evidence and a specific suspect. Not anymore.

Now police can compare genetic evidence gathered at the crime scene with millions of known DNA samples, finding matches, generating new suspects, linking together seemingly unconnected crimes, and identifying people who had been missing for decades.

Most of the samples in the database are taken from crime suspects and convicted felons, but analysts at forensic labs are increasingly loading the database with genetic material from crime scenes, unidentified remains and missing persons. So far, investigators have used CODIS to help with more than 143,000 cases.
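Those matches hinge on comparing short tandem repeat (STR) counts at a set of core loci. Here's a deliberately tiny, invented illustration of that kind of comparison; the locus names are real CODIS core loci, but the profiles and the matching rule are made up and bear no resemblance to the FBI's actual software:

```python
# Toy illustration of matching an STR profile against a database.
# Locus names are real CODIS core loci; the profiles and matching rule
# are invented for illustration only.
crime_scene = {"D8S1179": (13, 14), "TH01": (6, 9.3), "FGA": (21, 22)}

database = {
    "profile_0001": {"D8S1179": (13, 14), "TH01": (6, 9.3), "FGA": (21, 22)},
    "profile_0002": {"D8S1179": (12, 15), "TH01": (7, 9),   "FGA": (20, 24)},
}

def matches(evidence, candidate):
    """Require the same pair of repeat counts at every locus in the evidence."""
    return all(sorted(candidate.get(locus, ())) == sorted(alleles)
               for locus, alleles in evidence.items())

hits = [name for name, profile in database.items() if matches(crime_scene, profile)]
print(hits)   # -> ['profile_0001']
```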

In August, for example, police were able to identify the remains of a boy who went missing in 1989 when his twin brother's DNA turned up in CODIS for unrelated (and undisclosed) reasons. That same month, CODIS logged its approximately 10 millionth DNA profile: that of serial killer Ted Bundy, which means that local police and federal agents nationwide can now test forensic evidence from cold-case files against Bundy's DNA.

Check out the other nine most amazing databases in the world here.

Amazing Databases: FAOSTAT

[Image: FAOSTAT collage. (Wikimedia Commons)]

Monitoring the global food supply involves tracking data on agriculture, land use, fishing, forestry, food aid, nutrition and population growth. To make sense of it all, researchers at the Food and Agriculture Organization (FAO) of the United Nations built FAOSTAT, the world's largest database of food and agricultural information, with more than a million statistics covering five decades and 245 countries and territories.

Using FAOSTAT, researchers can quickly determine that in 2000, humans consumed 249 more calories per day than they did 20 years earlier; that 70 percent of the water that humans use goes to agriculture; that nearly two billion sheep and goats exist in the global herd; and that even though the planet produces enough food to feed everyone, 13 percent of people in the world are undernourished. Last year the FAO made FAOSTAT free, and since then the number of users has jumped from 400 to 11,500.

Among them are governments and NGOs plumbing FAOSTAT for ways to feed people more efficiently. In one recent study, China's Ministry of Agriculture compared FAO data on farmland use in 19 countries with the amount of staple foods those nations produce. One of the surprises: China's farms have more workers than they need and would actually be more efficient if more people migrated to cities.
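A study like that amounts to joining a farmland table with a production table and ranking countries by output per hectare. Here's a minimal sketch of that kind of query using pandas, with invented country names and numbers standing in for real FAOSTAT series:

```python
# Sketch of the kind of comparison described above, using invented numbers.
# A real analysis would pull the corresponding FAOSTAT series instead.
import pandas as pd

farmland = pd.DataFrame({"country": ["A", "B", "C"],
                         "arable_hectares_millions": [120, 35, 60]})
staples = pd.DataFrame({"country": ["A", "B", "C"],
                        "staple_output_megatonnes": [500, 210, 180]})

df = farmland.merge(staples, on="country")
df["tonnes_per_hectare"] = (df["staple_output_megatonnes"] * 1e6) / \
                           (df["arable_hectares_millions"] * 1e6)
print(df.sort_values("tonnes_per_hectare", ascending=False))
```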

Check out the other nine most amazing databases in the world here.

Amazing Databases: The Genographic Project

[Image: Genographic Project collage. (Wikimedia Commons)]

The best record of early human migration is found not in ancient bones or archaeological artifacts, but in the DNA of people living today. In 2005, to make that information accessible, the National Geographic Society and IBM launched the Genographic Project.

The project sells DNA-collection kits to people and provides them with an analysis of their origins. Participants are encouraged to donate their results to an anonymous database, which also stores DNA profiles of indigenous people collected by anthropological geneticists in 10 field labs. By mining the 420,000 profiles stored in the database, scientists can track genetic mutations across populations, retracing the steps of ancient humans.

In 2008, by studying the maternal lineages of 624 African genomes, researchers at the Genographic Project discovered that even though all humans share DNA from the same 200,000-year-old maternal ancestor ("Mitochondrial Eve"), early humans subsequently split into separate populations. Small bands of humans evolved in isolation for as much as half of our history as a species, before reuniting to form one population in the Late Stone Age.

Check out the other nine most amazing databases in the world here.

Amazing Databases: MD:Pro

[Image: MD:Pro collage. (Wikimedia Commons)]

With a catalog of more than 15 million malicious computer programs, MD:Pro is the Centers for Disease Control of the cybersecurity world. Frame4 Security Services, which was established in the Netherlands in 2006, created the database as a resource for security experts, who need access to malware to identify new threats and develop and test defenses.

Frame4 analysts gather samples using computers called honeypots, which are programmed to attract and misdirect malware, and by soliciting donations from antivirus researchers and cybersecurity experts. For a fee, analysts can download samples from MD:Pro's FTP site; some samples come with source code and Frame4's analysis of the malware. (To avoid selling samples to malware programmers and hackers, Frame4 screens its users.) Since the addition of a second processing engine earlier this year, MD:Pro has been growing by more than a million samples a month.

Check out the other nine most amazing databases in the world here.

The World's Most Amazing Databases: The Encyclopedia of Life

[Image: Encyclopedia of Life collage. (Wikimedia Commons)]

Four years ago, the Smithsonian Institution, the Field Museum of Natural History, Harvard University, the Missouri Botanical Garden, the Marine Biological Laboratory and the Biodiversity Heritage Library joined together to create a comprehensive collection of data about every living thing on Earth.

So far, the consortium's researchers have collected and vetted information on 40 percent of the planet's 1.9 million known species. Want observations describing the nocturnal behavior of the flying lemur? How about a map showing the distribution of the dark honey fungus, whose underground filament network spans thousands of acres and might make it the largest organism in the world? They're in there.

The researchers gather information from hundreds of sources (including such databases as the Barcode of Life and Morphbank), work it into a consistent format, and organize it into individual species pages. Combining disparate data into a single, searchable database should make it possible to see new connections between different forms of life. By looking for lifespan patterns or similarities in resistance (or susceptibility) to disease—and by doing so across a broad range of EOL species pages—biologists will aim to find new species and genes to target in longevity studies, vaccine development and other medical research. At the current pace, EOL will hold data on every known plant, animal, insect and microbe species by 2017.

Check out the other nine most amazing databases in the world here.


The World's Most Amazing Databases: The Intergovernmental Panel on Climate Change's Data Distribution Centre

[Image: Climate change collage. (Wikimedia Commons)]

Before the Intergovernmental Panel on Climate Change launched its Data Distribution Centre (DDC) in 1998, researchers who needed climate-change projections had to get them from the handful of scientists who specialized in computing-intensive statistical climate modeling. Modelers became backlogged with requests; studies languished.

Worse, they often used different assumptions and data formats, making it difficult to quickly compare results. Now, however, the DDC serves as the world's central repository for projections about future climate. DDC analysts convert data from different models into compatible, downloadable formats before feeding it into the master database.

If a scientist wants to study how a variety of global-warming scenarios would affect, say, maize production in China, he can choose from data sets generated by 49 different statistical models and download data that's been converted into a usable, apples-to-apples format.
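That "apples-to-apples" conversion mostly means putting every model's output into the same units, labels, and layout before comparing. A minimal sketch of the idea, with invented model names, variables, and values:

```python
# Minimal sketch of harmonizing output from two hypothetical climate models
# so they can be compared directly. Names and values are invented.
import pandas as pd

model_a = pd.DataFrame({"year": [2050, 2100], "temp_anomaly_C": [1.8, 3.1]})
model_b = pd.DataFrame({"yr":   [2050, 2100], "temp_anomaly_F": [3.4, 5.9]})

# Rename columns and convert Fahrenheit anomalies to Celsius.
model_b = model_b.rename(columns={"yr": "year"})
model_b["temp_anomaly_C"] = model_b.pop("temp_anomaly_F") * 5 / 9

combined = pd.concat([model_a.assign(model="A"), model_b.assign(model="B")])
print(combined.pivot(index="year", columns="model", values="temp_anomaly_C"))
```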

Check out the other nine most amazing databases in the world here.

Obama Gives Government Agencies Four Months To Make Digital Plan

[Image: The National Archives. (NCinDC via Flickr)]

Despite this era's amazing advances in data storage and data mining, the accumulated records of our federal bureaucracy are largely — and perhaps unsurprisingly — languishing in the early 20th century. Paperwork and filing cabinets still comprise the bulk of government records. President Obama would apparently like to change this, so this week he gave federal agencies four months to come up with a Web 2.0-inspired way to bring their records management systems online.

"With proper planning, technology can make these records less burdensome to manage and easier to use and share,"Obama wrote.

Within four months, the head of each federal agency must submit a report to the federal archivist's office explaining how the agency will improve its current records management processes. This extends to every type of electronic record — emails from citizens, social media connections, even cloud-based records storage. Four months after that deadline, the archivist and the director of the Office of Management and Budget have to come up with a game plan. They'll probably have to create some type of government-wide records management system, one that works across the entire bureaucracy and is more efficient than current filing methods.

The National Archives and Records Administration already stores 124 TB of data, the result of a 10-year digitization effort outsourced to Lockheed. But the program, like so many government programs, was plagued by cost overruns and delays. A newly focused effort will have to be more efficient.

The idea is to make federal records more accessible to the public at large, Obama said.

"Records transferred to NARA provide the prism through which future generations will understand and learn from our actions and decisions," Obama wrote. "When records are well-managed, agencies can use them to assess the impact of programs, to reduce redundant effort, to save money, and to share knowledge within and across their organizations. In these ways, proper records management is the backbone of open government."

Whether or not you believe Obama is a champion of said open government is another question.

For our part, we're wondering what this new federal database will look like. PDFs? How about Adobe InDesign files, which we totally love working with? "Maybe just a big Case Logic book filled with CD-ROMs, handcuffed to a secret service agent that never leaves the President's side," suggests co-contributor Clay. Sounds like a good place to start.

[Computerworld]

Scientists Build a Data Storage Device Out of Salmon DNA

[Image: Storing data in salmon DNA: "The entire Library of Congress, on ice." (Joe Mabel via Wikimedia)]

It's good smoked, straight up on the grill with a little lemon and butter, or rolled into sushi. And now, thanks to researchers at Taiwan's Tsing Hua University and the Karlsruhe Institute of Technology in Germany, salmon is also good sandwiched between two electrodes. Using silver nanoparticles, a couple of electrodes, and a thin layer of salmon DNA, those researchers have developed a "write-once-read-many-times" (WORM) data-storage device that they think could eventually lead to a replacement for silicon.

Their device works based on the way silver atoms behave inside a thin film of salmon DNA. Shine a UV light on such a system, and the silver atoms will bunch into nanoparticles within the DNA film. Add electrodes to both sides of the film, and you've got an optical data storage device.

And here's how it works: at little or no applied voltage, only a small current (or none at all) passes through the device, which creates its "off" state. Slowly increase the voltage and not much changes; the current stays low. That is, until you reach a certain voltage threshold (about 2.6 volts), at which point the device suddenly switches to high conductivity with good retention of that state.

That switching ability above a certain voltage threshold creates two distinct "off" and "on" states that can be used to store data like any other optical data device. And the changes in conductivity are basically irreversible, meaning once a device has been switched, it stays in that conductive state (either on or off). That means you should be able to write something into a DNA-based data storage device and then retrieve that info later.
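The behavior described above, low conductivity until a write pulse of roughly 2.6 volts permanently flips the film into a high-conductivity state, boils down to a few lines of logic. The threshold comes from the article; the rest is our simplified abstraction, not the device physics:

```python
# Simplified model of the write-once behavior described above.
# The 2.6 V threshold comes from the article; everything else is a toy abstraction.
class SalmonDnaWormBit:
    WRITE_THRESHOLD_V = 2.6

    def __init__(self):
        self.written = False          # starts in the low-conductivity "off" state

    def apply_voltage(self, volts: float) -> str:
        if volts >= self.WRITE_THRESHOLD_V:
            self.written = True       # the switch is irreversible: write once
        # Reads at low voltage report whichever state the film is stuck in.
        return "high conductivity (1)" if self.written else "low conductivity (0)"

bit = SalmonDnaWormBit()
print(bit.apply_voltage(1.0))   # -> low conductivity (0)
print(bit.apply_voltage(3.0))   # -> high conductivity (1)   (written)
print(bit.apply_voltage(0.5))   # -> high conductivity (1)   (state is retained)
```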

[Gizmag]

World's Smallest Memory Bit Stores Data Using Just 12 Atoms

[Image: Spin-polarized imaging with a scanning tunneling microscope reveals the structure of the world's smallest magnetic data storage unit: just 12 iron atoms ordered in an antiferromagnetic structure. (Sebastian Loth/CFEL)]

The world's smallest magnetic data storage unit is made of just 12 atoms, squeezing an entire byte into just 96 atoms, a significant shrinkage in the world of information storage. It's not a quantum computer, but it's a computer storage unit at the quantum scale. By contrast, modern hard disk drives use about a million atoms to store a single bit, and a half billion atoms per byte.

Until now, it was unclear how many (or how few) atoms would be needed to build a reliable, lasting memory bit, the basic piece of information that a computer understands. Researchers at IBM and the German Center for Free-Electron Laser Science decided to start from the ground up, building a magnetic memory bit atom-by-atom. They used a scanning tunneling microscope to create regular patterns of iron atoms aligned in rows of six each. They found two rows was enough to securely store one bit, and eight pairs of rows was enough to store a byte.

Data was written into and read out of the bits using the STM — so it's not like this type of bit will be integrated into hard disks anytime soon. But it answers some fundamental questions about the nature of classical mechanical systems, said Andreas Heinrich, the lead investigator into atomic storage at IBM Research Almaden and an author on a new paper describing the teeny bit. The team was interested in the transition from quantum to classical behavior, he said.

"If you take a single atom, you have to look at quantum mechanics when you describe its behavior," he said in an interview. "As you make the (system) bigger and bigger, several iron atoms start talking to each other, and at some point you can ignore all of this quantum behavior and just think of them as a classical magnetic structure." It turns out that point is around 12 atoms big.

"Many people would anticipate you would have to use quantum mechanical systems to describe these structures," Heinrich said. "That was the most surprising thing to me."

At the smallest scales, quantum effects blur stored information. A bit made of six atoms would spontaneously switch magnetic states — flipping from "0" to "1" — about 1,000 times per second, for instance, which is much too often to be useful for data storage, Heinrich said. Eight atoms switch states about once per second. But 12 atoms held their state long enough to be usable for storage, switching only when an outside magnetic influence (in this case, the STM) changed it. The nanomagnets are only stable at a chilly 5 kelvin, or about -450 degrees F.

The other breakthrough in this paper is the bits' antiferromagnetism — this marks the first time antiferromagnetism has been used to store data. Ferromagnets, used in most modern data storage and other applications, use magnetic interactions between iron atoms to align all the atoms in a single direction. This creates a magnetic field that can be read out. This becomes a problem at the teeniest scales, however, because tightly packed magnetic bits can interfere with each other — this limits the downsizing of data storage systems. But this new 12-atom bit uses antiferromagnetism — adjacent atoms are magnetized in opposite directions, so their spins alternate along the row. The iron atoms were separated by nitrogen atoms and induced with the STM to adopt those alternating spins, Heinrich said. This allowed the bits to be packed closer together, greatly increasing storage density.

The researchers rewrote the byte's magnetic state five times, storing the ASCII code for each letter of the word "think," one of Big Blue's slogans.
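Since one antiferromagnetic byte here is 96 atoms (eight 12-atom bits), storing "think" meant rewriting that byte once per letter. The bit patterns involved are just the letters' ASCII codes:

```python
# The five successive values the 96-atom byte would have held, one 8-bit
# ASCII code per letter of "think" (IBM's image shows the uppercase codes;
# the idea is the same).
for letter in "think":
    print(letter, format(ord(letter), "08b"))
# t 01110100
# h 01101000
# i 01101001
# n 01101110
# k 01101011
```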

Sebastian Loth, who left IBM for CFEL four months ago and is lead author of the paper, said the 12-atom bit raises plenty of new questions for classical computing at quantum scales.

"We can now use this ability to investigate how quantum mechanics kicks in. What separates quantum magnets from classical magnets? How does a magnet behave at the frontier between both worlds? These are exciting questions that soon could be answered," he said.

The paper appears in this week's issue of Science.

[Image: A white signal on the right edge corresponds to logic 0 and a blue signal to logic 1. Between two successive images, the magnetic states of the bits were switched to encode the binary representation of the ASCII characters "THINK." (IBM Research)]

Using Heat to Record Information Could Improve Data Storage Speed a Hundred-Fold

[Image: The laser pulse temporarily aligns the two ferrimagnetic materials (the red and blue in this image) while powered on, and then the materials revert once it's off. (Richard Evans, University of York)]

An international team of researchers claims to have figured out a way to use ultrafast bursts of heat, rather than the typical magnetic field, to record a bit of information on a hard drive--a development they say could vastly increase the efficiency and speed of hard drives, allowing recording at multiple terabytes per second, hundreds of times faster than current methods.

Typical magnetic recording technology for hard drives uses an external magnetic field to invert the poles of a magnet. The speed of the recording depends on the strength of the magnetic field. But the physicists, led by a team at the University of York, say they have figured out a way to use heat rather than a magnetic field to cause the same effect.

The heat in question is a simple ultrafast heat pulse, beamed with a laser. At only 60 femtoseconds, it's exceedingly brief, but manages to provoke a ferromagnetic state in certain materials.
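Those two numbers hang together: if each 60-femtosecond pulse flipped one bit and the bits were written back to back (a deliberately naive assumption on our part, ignoring laser repetition rates and everything else practical), the raw rate already lands in the terabytes-per-second range:

```python
# Naive upper-bound estimate: one bit flipped per 60 fs pulse, written back-to-back.
# This ignores everything practical (laser repetition rate, addressing, cooling).
pulse_s = 60e-15
bits_per_second = 1 / pulse_s
terabytes_per_second = bits_per_second / 8 / 1e12
print(f"~{terabytes_per_second:.1f} TB/s")   # -> ~2.1 TB/s
```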

It's very interesting on a theoretical level; this is a change in how we thought data storage worked in pretty basic ways. But given the growing prominence of solid-state storage--which is not magnetic, and can theoretically perform these operations far faster than a magnetic hard drive--we're not sure this is really going to catch on. Still, interesting stuff.

The article appears in the journal Nature Communications.

DNA Inside Cells Can Serve As Rewritable Data Storage

[Image: Under ultraviolet light, petri dishes containing cells glow red or green depending upon the orientation of a specific section of genetic code inside the cells' DNA. The section of DNA can be flipped back and forth using the RAD technique. (Norbert von der Groeben)]

DNA is the blueprint for life, and now it can serve as a computer to monitor life's processes. Bioengineers transformed DNA into a one-bit memory system that can record, store and erase data within living cells. A future DNA memory device could be used to track cell division and differentiation in cancer patients, perhaps, or to monitor what happens as cells get sick or age.

We've seen plenty of body-monitoring computer systems, from chips that can swim through the bloodstream to nanowires that can tap the heart or other muscle. But so far, these systems are limited to a few processes. This system could work like rewritable memory in your computer, recording and erasing information again and again.

The system flips DNA sequences back and forth between two states, basically the genetic equivalent of a binary switch. One DNA orientation equates to "one," and the other equates to "zero." The process uses an enzyme taken from bacteriophages to cut and recombine the DNA. The recombinase enzyme moves to a particular swath of DNA and flips it around so its base pairs basically read backward, and a second signal flips it back.
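In software terms, a RAD bit behaves like a toggle whose value is the orientation of a DNA segment, and "flipping" replaces that segment with its reverse complement. Here's a toy model of that idea; the sequence and names are invented, not the Stanford group's actual constructs:

```python
# Toy model of a recombinase-addressable data (RAD) bit: the stored value is
# the orientation of a DNA segment, and "flipping" replaces the segment with
# its reverse complement. Sequence and function names are invented.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

class RadBit:
    def __init__(self, segment: str):
        self.reference = segment       # orientation we call "0"
        self.segment = segment

    def flip(self):                    # one recombinase signal inverts the segment
        self.segment = reverse_complement(self.segment)

    def read(self) -> int:
        return 0 if self.segment == self.reference else 1

bit = RadBit("ATGCCGTA")
print(bit.read())   # -> 0
bit.flip()
print(bit.read())   # -> 1   (segment now reads backward as its complement)
bit.flip()
print(bit.read())   # -> 0   (flipped back, like rewritable memory)
```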

Stanford researchers Jerome Bonnet and Drew Endy call it a "recombinase addressable data" module, or RAD. The team worked for three years to find the right balance of proteins that would reliably flip the DNA sequences back and forth without degrading.

To test whether it worked, the team modified E. coli bacteria to fluoresce in different colors depending on the state of the DNA bit. In lab tests so far, it's been able to monitor the activity of E. coli as they double more than 100 times. The team's goal is to produce a byte, combining 8 of these RAD bits to build a larger memory system.

The work appears in this week's issue of the Proceedings of the National Academy of Sciences.

FYI: What’s The Most Durable Way To Store Information?


[Image: USB drives. (David Arky/Getty Images)]

Despite claims to the contrary, the storage media in wide use today—CD-ROMs, spinning hard drives, flash memory, etc.—aren’t very durable. “You’re talking years, not decades,” says Howard Besser, a professor and archivist at New York University who was named a pioneer of digital preservation by the Library of Congress. “A CD-ROM was originally supposed to last 100 years, but many fail in 10.” 

Old-fashioned paper has done very well by comparison. Until people made a habit of adding acidic chemicals to their paper in the 19th century, books could last five hundred years or more. And while paper has its vulnerabilities—to fire and water, for example—so do more newfangled technologies. A hard disk, for instance, may suffer from a loss of mobility. “You’ve got to have it spinning regularly or you’re not going to be able to play it,” says Besser. “It’s kind of like the Tin Man in The Wizard of Oz.”

At a 1998 conference, Besser and 12 others worked out a plan for the perfect long-term storage device: They would etch images into platinum with a laser and bury the platinum in the desert. “Ideally, we would put a nuclear-waste facility next to it,” Besser adds, “so people will never forget where it is.”

But even the most indestructible data storage won’t be of any use if no one can decode the contents. Archivists also need to preserve the languages or programs used to save information, whether that’s ancient Greek or Word for Windows 95. Besser and his colleagues worry that this decoding issue will be the real bottleneck. “The durability of something is a far smaller problem than the other problems that we have,” he says. 

This article originally appeared in the December 2013 issue of Popular Science.

