Supervisor Elham AbolFateh
Editor in Chief Mohamed Wadie

Can AI Be Controlled?


Wed 10 Jul 2024 | 04:49 PM
By Pr. Abdelhak Azzouzi

The United Nations General Assembly has finally adopted its first resolution on artificial intelligence (AI). Sponsored by Morocco and the United States, the resolution was backed by 123 UN member states at its adoption.

This resolution is considered a historic step towards setting clear international standards for artificial intelligence and promoting safe and reliable systems for this technology.

The resolution, titled “Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development,” charts the course to be followed in the field of artificial intelligence so that each country can seize the opportunities this technology offers while managing its risks.

It also stresses the need to continue discussions on appropriate AI governance approaches that are anchored in international law.

I fully agree with the Moroccan diplomat Omar Hilale, who presented the resolution at a press conference alongside his American counterpart, Ambassador Linda Thomas-Greenfield, the US Permanent Representative to the United Nations. He said that the resolution is not an end in itself, but rather the beginning of a collective project to create safe and reliable artificial intelligence systems.

Yes, it is only the beginning, because artificial intelligence is developing at a speed that outstrips imagination and outpaces all laws and limits, and its components and ramifications exceed anything humans have known in the history of invention and modern technology. One explanation for this can be found in the reasons that led the British-Canadian “godfather of artificial intelligence,” Geoffrey Hinton, to resign from Google in May 2023, explaining his decision by saying that he wanted to “speak freely about the dangers of AI.”

Indeed, in his frequent media appearances, Hinton spoke about the dangers of artificial intelligence, technological unemployment, and the deliberate misuse of this innovation by parties he described as "malicious."

He said: "I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have. We are biological systems and these are digital systems, and the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world."

"And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person," he added.

He also said that artificial intelligence "may soon surpass the information capacity of the human brain," and described some of the risks posed by these chatbots as "very scary."

His view has proved right: Elon Musk, CEO of the American electric car company Tesla and a co-founder of the artificial intelligence research company OpenAI, declared a few days ago in a conversation on the X platform that artificial intelligence surpassing the smartest humans could arrive as early as next year, or by 2026.

The toxins emanating from automated software can be amplified by platforms such as ChatGPT, which belong to the realm of open-ended artificial intelligence and the creation of non-human minds that are smarter than us, outperform us, and could replace us. Personally, as a university professor who supervises doctoral theses and final-year research projects, I am against the unchecked development of these systems, because they will help produce a generation of researchers unable to understand, explain, theorize, or create.

Consider, for example, a doctoral candidate in international relations and political science working on the new international order, the new Russian strategic doctrine, constitutional systems in Africa, or the sociology of international relations: he or she can simply enter the topic into the platform, request a given number of pages, and receive a ready-made, polished text that conforms to academic and scientific conventions. The same goes for medicine, biology, and other fields.

Worse still, ChatGPT can be used to write computer code without any technical knowledge.

We supervisors will have no choice but to accept such work, because even the specialized plagiarism-detection systems will find no duplicated content in it; it will read as if the students and researchers had written it themselves. The result will, of course, be devastating for societies. It will undermine the foundations of universities, discourage reasoning and diligence among millions of people, and release into the intellectual marketplace graduates who have nothing to do with scientific research, who succeed effortlessly, and who end up holding diplomas that do not reflect the competence they are supposed to certify.

Nor should we forget that such platforms will heighten the risks of electronic deception, the spread of misleading information, and cybercrime, not to mention the crimes that terrorist groups could commit if robots were given the ability to pursue their goals.

This is why many countries are now trying to establish regulations for the development and use of artificial intelligence in the military field, warning of “undesirable consequences” and others related to “the issue of human involvement” in addition to “the lack of clarity regarding responsibility” and “possible unintended consequences.”

At the Euro-Mediterranean University of Fez, we were among the first to create advanced training courses in the field of artificial intelligence. Many engineers have graduated from these courses, some of whom are now employed at major international companies.

The creation of these courses stems from our belief that the university, even if it has multiple specializations, must succeed in positioning itself as a destination for training future generations in specializations that will benefit their countries, at the forefront of which is artificial intelligence.

However, in these courses we continue to defend a humane and civilizational axiom: powerful artificial intelligence systems should be developed only when we are confident that their effects will be positive and their risks will be under control.