
Google’s Sundar Pichai doesn’t want you to have a clear view on the dangers of AI – Newsdio

Alphabet and Google CEO Sundar Pichai is the latest tech giant to make a public call for AI to be regulated, while encouraging lawmakers toward a diluted, enabling framework that puts no hard limits on what can be done with AI technologies.

In an opinion article published in the Financial Times today, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his tone injects a suggestive undercurrent that inflates the risk to humanity of not allowing technologists to continue with business as usual and apply AI at population scale, with Google’s chief saying: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so,” thereby seeking to frame ‘no hard limits’ as the safest option for humanity.

At the same time, the tone minimizes any negative aspects that might cloud the greater good Pichai implies AI will unlock, presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

It is all about managing the level of risk, is the main suggestion, rather than directly questioning whether the use of a risk-laden technology such as facial recognition should even be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas, but they also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The Internet made it possible to connect with anyone and obtain information from anywhere, but it also made it easier for misinformation to spread,” he continues. “These lessons teach us that we must be clear-eyed about what could go wrong.”

To read “clear-eyed”: accept the technology industry’s framing of “collateral damage”. (Which, in the case of misinformation and Facebook, seems to mean feeding democracy into an ad-targeting meat grinder.)

Meanwhile, entirely absent from Pichai’s discussion of AI risks: the concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny, that.

Of course, it is hardly surprising that a technology giant which, in recent years, has renamed an entire research division ‘Google AI’, and which has previously been called out by its own workforce over a project applying AI to military weapons technology, should be pressuring lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing better than zero regulation are laws made by useful idiots who have fallen hook, line and sinker for industry-expounded false dichotomies, such as the claim that it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House apparently aligning itself with the technology giants’ wish for “innovation-friendly” rules that make life easy for their business. (To wit: this month the White House chief technology officer, Michael Kratsios, warned in a Bloomberg opinion article against “preventive, burdensome or duplicative rules that would unnecessarily hinder AI innovation and growth”.)

The new European Commission, meanwhile, has been taking a firmer line on both AI and big tech.

It has made technology-driven change a key political priority, with President Ursula von der Leyen making public noises about reining in technology giants. It has also pledged to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within its first 100 days in office. (She took up the post on December 1, 2019, so the clock is ticking.)

Last week, a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it is leaning toward a relatively light-touch approach (although the European version of light touch is considerably more involved and interventionist than anything born of a Trump White House, clearly), though the document does float the idea of a temporary ban on the use of facial recognition technology in public places.

The document states that such a ban “would safeguard the rights of individuals, in particular against any possible abuse of the technology”, before arguing against such a “far-reaching measure that could hamper the development and adoption of this technology”, in favor of relying on provisions in existing EU legislation (such as the EU data protection framework, GDPR), plus relevant adjustments to current product safety and liability laws.

While it is still unclear where the Commission will land on AI regulation, even the light-touch version it is considering would likely be far more burdensome than Pichai would like.

In the opinion article he makes the case for what he dubs “sensible regulation”, that is, adopting a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities,” read: the plentiful business opportunities Google is eyeing, assuming that the expected scale of additional revenue to be gained by supercharging the expansion of AI-driven services into all kinds of industries and sectors (from health to transportation and everywhere in between) is not hindered by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and after-the-fact “reviews”, to keep the AI spice flowing.

The opinion article touches on facial recognition only briefly, despite FT editors choosing to illustrate it with an image of the technology. Here again Pichai seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology, by speaking only of condemning “nefarious uses” of facial recognition.

Of course, this deliberately obfuscates the inherent risks of letting black-box machines make algorithmic guesses at identity every time a face passes through a public space.

You cannot expect to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is trying to pull the wool over lawmakers’ eyes. He doesn’t want them to see the risks inherent in such a potent and powerful technology, nudging them instead toward viewing only a narrow, ill-intentioned subset of uses and “negative” AI consequences as worthy of “real concerns”.

And then he bangs the drum again for “a principled and regulated approach to applying AI” (our emphasis): emphasizing regulation that, above all, gives the green light to the application of AI.

What technologists fear most here are the rules that tell them when artificial intelligence cannot be applied.

Ethics and principles are, to a degree, mutable concepts, ones the technology giants have become well practiced at claiming as their own, for PR purposes, including by attaching self-styled “guardrails” to their own AI operations. (But of course there are no actual legal ties binding there.)

At the same time, data-mining giants like Google have proven themselves very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user interfaces with confusing dark patterns that push people to click away their rights.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driver’s seat.

Some forward-thinking regulators have called for laws that contain at least a moratorium on certain “dangerous” applications of AI, such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.

And a ban would be far harder for the platform giants simply to bend to their will.
