A few years ago, automation was seen as an absolute good: a way to increase business efficiency, and relieve people of having to do painstaking busy work. But, as technology evolves, enthusiasm is often replaced by fear, and a desire to fight for the place of humans in a world of machines.
In the late 1700s and early 1800s, during the Industrial Revolution, this fear fueled the Luddite uprisings: beginning in 1811, craftsmen who had lost their jobs to automation systematically destroyed the textile machinery that had replaced them. In the late 1900s, that same fear was behind the popular films Blade Runner, The Terminator, and The Matrix. Its highest expression is the idea that machines will revolt and artificial intelligence will take over everything. And, judging by the box office numbers, this theme remains relevant today.
The two-hundred-year-old fear of losing your job is partly offset by the possibility of choosing a new career. Professions that have fallen into obsolescence are replaced by new ones that were impossible even to imagine before the computer age: programmers, UX designers, and software architects, to name a few. Information technology has redesigned and expanded the service sector, giving an enormous number of people new opportunities to earn money. Millions of today’s Uber drivers had no idea just a few years ago that they would want to drive people around for money. Automation leads not to people being squeezed out of their jobs, but to the natural process of old fields being replaced by new ones.
Man proposes…
Futurists often exaggerate the speed of technological advance, and the vector of their predictions often changes, too.
When Twitter first came out, someone accurately pointed out: "Ten years ago, we thought we were going to have flying cars, but all we got was 140 characters on our phones."
In the 1990s, when computer use was shifting en masse from text-based interfaces to graphical ones, there were high hopes for automatic interface testing. A person opens an app, the app records the full sequence of their actions, and that sequence can then be played back. When the program does something it is not supposed to, the errors surface automatically, which makes the technique very useful to programmers. In fact, automated testing only really took off later, in the 2000s, when web applications became widespread.
Now we are seeing a revival of the idea of recording the sequence of a user’s actions, in the form of Robotic Process Automation (RPA). In essence, rather than building complex direct integrations between two programs, we record what the user clicked on and then replace the user with a robot. Experienced Microsoft Office users will recognize the idea from macro recorders and Visual Basic for Applications. Today, it often turns out to be simpler and cheaper to connect corporate systems this way than to hire programmers to integrate them in code.
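The record-and-replay idea behind both UI-test automation and RPA can be sketched in a few lines. This is a minimal illustration only: the `Action` and `Recorder` names are hypothetical, and a real tool would hook into the operating system or browser event stream rather than receive actions explicitly.

```python
# Minimal sketch of record-and-replay, the idea behind UI-test
# recorders and RPA. Names here (Action, Recorder) are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Action:
    """One recorded user step: which control, and what input."""
    target: str  # e.g. an element id or window title
    value: str   # e.g. text typed, or "click"


@dataclass
class Recorder:
    actions: List[Action] = field(default_factory=list)

    def record(self, target: str, value: str) -> None:
        """Append one user step to the recorded session."""
        self.actions.append(Action(target, value))

    def replay(self, dispatch: Callable[[str, str], None]) -> None:
        """Play the session back through a dispatcher (the 'robot')."""
        for a in self.actions:
            dispatch(a.target, a.value)


# Record a session once...
session = Recorder()
session.record("login.username", "alice")
session.record("login.submit", "click")

# ...then a "robot" replays it later; here the robot just logs.
log = []
session.replay(lambda target, value: log.append((target, value)))
print(log)  # [('login.username', 'alice'), ('login.submit', 'click')]
```

The key design point is that the dispatcher is pluggable: the same recording can drive a test harness that checks for errors, or an RPA robot that performs the clicks against a live corporate system.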
In the late 2000s, geo-location services like Foursquare were very popular. The ability to determine your coordinates through a mobile app by several means (GPS, cell-tower triangulation, Wi-Fi positioning, and so on) gave rise to a new wave of expectations. There was a moment of hype: you could become "Mayor" of some cafe, and people thought this would be the great new social network that would overshadow all the previous ones. Now almost no one uses these services; they have turned into a pretty good crowdsourced database of location coordinates.
Area of responsibility
Lately we have been hearing a great many prophecies about driverless cars, which will supposedly put cab drivers, truck drivers, and Uber drivers out of work. But none of this happens overnight, even though all of the world’s leading carmakers, and many IT companies, are working to make it happen.
While the problem of automated driving has been partially solved in the railroad and aviation industries, it is harder to solve on the road. It is one thing to program an autopilot that drives a car during the day on a sunny highway; it is another thing altogether on a snowy mountain road. And ordinary roads are not easy either: speeds are higher and fatal accidents are a real risk. One such fatal crash has already occurred with a Tesla, only six miles from the company’s Silicon Valley offices.
Which brings us to the most important question: who is responsible for the harm done by a program or a computer? The manufacturer? The owner? The software developer?
For example, there is a whole world of accounting and finance software products. They develop, change, improve, become smarter, and move into the cloud. Is the profession of accounting dead? No, it is not, even though 20 years ago it seemed like it might be. The accountant is still responsible for the numbers.
The question of responsibility is, and will continue to be, the main brake on technological progress. The irony is that although a computer makes far fewer mistakes than we do, unlike the human who uses it as a tool it has no freedom, no property, and no career. An autopilot can land an airplane without a single button being pressed manually, but in a difficult situation, in the air as in business, it is the human who makes the decisions. No software, for example, will ever steer a business through a crisis. Every profession that carries serious responsibility will survive. Software will only make people’s work easier and help them make decisions.