The New York Times has somewhere between five and seven million physical photographs in its vast archive, many of which date back over a century. The pictures document pivotal moments and form a valuable record of recent history, but the prints are vulnerable to deterioration (they fortunately survived flooding in 2015). To protect the photographs, the Times is digitizing the archive with Google Cloud.

Not only will scanning all of the images help preserve them, but reporters should find it far easier to dig into the archive than by leafing through prints in filing cabinets. Many of the photographs have contextual information on the back, such as the time and location where they were taken, captions, and when they were published in the newspaper. So the Times, with the help of Google's technology, built a system that recognizes and processes handwriting and text on both sides of each photograph.
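The article doesn't spell out the Times' actual pipeline, but Google's Cloud Vision API offers document text detection that handles both printed and handwritten text. The sketch below is a minimal illustration using the Python client library; the file paths and helper function are assumptions, not the Times' real system.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def extract_text(image_path: str) -> str:
    """Run document text detection (OCR) on a scanned print or its reverse side."""
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)
    # full_text_annotation aggregates printed and handwritten text found in the scan
    return response.full_text_annotation.text

# Hypothetical scans of the front and back of one archived print
front_text = extract_text("photo_0001_front.jpg")
back_text = extract_text("photo_0001_back.jpg")  # captions, dates, locations noted on the reverse
```

Text pulled from the back of a print could then be attached to the digitized image as searchable metadata.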

Many of the photographs from the Times' more recent past will already be digital, so this effort is more about preserving the historical images and using Google's AI to find stories hidden within them. It should be far simpler, for instance, to tell the story of how a particular neighborhood changed over time through the paper's photography. The Times may also take advantage of Google's vision AI tools to identify objects and places in images, which could make categorizing them easier and help reporters and editors surface them when browsing or searching the archive.
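As a rough sketch of how that kind of tagging might work, Cloud Vision's label and landmark detection can suggest subjects and recognizable places for a scanned image. The confidence threshold and output structure below are illustrative assumptions.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def suggest_tags(image_path: str, min_score: float = 0.7) -> dict:
    """Return candidate subject labels and recognizable landmarks for a scanned photo."""
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    labels = client.label_detection(image=image).label_annotations
    landmarks = client.landmark_detection(image=image).landmark_annotations

    return {
        "labels": [label.description for label in labels if label.score >= min_score],
        "places": [landmark.description for landmark in landmarks],
    }

# Example output might look like {'labels': ['Skyscraper', 'Street'], 'places': ['Times Square']}
print(suggest_tags("photo_0001_front.jpg"))
```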
