Thursday 30 October 2025

Artificial Intelligence

AI is already more and more widely used in:

Language work – in translation and interpreting in real time

Systems for identification, recognition and surveillance, which can recognise objects in temporal sequence.

Fields requiring forecasts and predictions, e.g. social trends, but also scientific trends

The management of resources, such as logistics in industry and the military

For autonomous weapons systems in the military, like lethal drones and rockets in Gaza

Generative AI is used in the production of texts, photos and videos for disinformation by states, criminal organizations or individuals, and to infringe copyright.

AI can produce output, make decisions and act (if it is in control of machines), but it doesn’t know what it is doing (it doesn’t possess awareness). So it needs to be under human control. AI could arrive at a conclusion that is illogical from the human point of view (based on the wrong data or instructions), so it needs human screening to avoid serious mistakes.

Moreover, if there is a bias in the data, it will be reproduced in the AI’s conclusions. AI is limited by its database, so it needs high-quality data, and huge quantities of it, to be effective.

AI systems have an impact on the environment as they consume lots of energy and require cooling systems.

When AI reaches a conclusion it is often a ‘black box’: it isn’t always easy to reconstruct why or exactly how it has reached that conclusion.

Warfare

The use of AI in fully autonomous weapons is problematic both practically and morally. If a drone is given decision-making autonomy, can it distinguish between soldiers and civilians? And where does accountability for its actions lie?

AI can also be used in military command and control to accelerate timeframes and reaction times, and to be faster than the enemy (if we don’t use it, an enemy that does will act or react faster than us). Does this encourage serious errors?

Information Warfare

With generative AI, the material produced is so sophisticated that it seems real: the ‘deep fake’. AI can be used to generate disinformation aimed at a target group (e.g. voters in another country), for example by using clickbait. It can be used to create uncertainty and push certain narratives, responding to public concerns and moulding its message to target and appeal to a particular audience or individual. Russia has used deep fakes to try to push anti-NATO fears and distrust in Ukraine and Eastern Europe. Social media are used as a conduit for the dissemination of disinformation produced by ‘content farms’. China has targeted Taiwanese public opinion in the same way, pushing anti-government narratives, and has combined this use of AI with paid pro-China supporters on Taiwanese social media.

AI as a geopolitical reality

The US still has a technological advantage (e.g. in cloud computing) but China is accelerating its progress. The EU wants regulation but could rapidly fall behind in AI development. Some experts worry that too much regulation will slow down innovation to the advantage of a less scrupulous Chinese or Russian competitor. Others in Europe argue that rapid unregulated development is dangerous. So what is really needed is a balance between competition for innovation and cooperation in regulation and control.

At the moment Taiwan is the heart of the semiconductor industry (65% of global microchip production and 90% of the most advanced microchips) and this could lead to conflict between China and the US.

Since good AI means having large quantities of quality data as a base, control of that data becomes a strategic question in both the military and industrial fields. So new alliances may emerge: technological blocs. AI could be like nuclear power in terms of creating power blocs.

Will states remain in control of AI, or will it be private companies (OpenAI, Google AI, Microsoft Copilot, Microsoft AI etc.), as seems to be happening to some extent in the current space race (SpaceX and Starlink)?

Cybersecurity

AI is being used in a constant and escalating attempt to breach the security of public institutions (as a form of hybrid warfare) and private institutions (to gain competitive advantages or damage a rival). The private sector will often need the assistance of the state to protect itself adequately. The investment costs in cybersecurity will be high and countries will need to cooperate with allies where possible (through NATO, the EU or bilaterally, for example).

All institutions, both public and private, will require effective cybersecurity systems able to evolve, counter and keep ahead of the threats from state actors and non-state actors (it is often difficult to tell which is which because of proxy actors). They will also require their staff to be aware of all this, to possess the necessary IT skills to deal with the security threats, and to constantly update those skills.

Cyberattacks can target the control of critical physical structures like power stations, the movement of ships in and out of ports, the Iranian nuclear enrichment programme (as with Stuxnet) or software used in supply chain logistics. Satellite systems are also highly vulnerable. Weapons systems can be compromised. AI can be used in cyberattacks aimed at undetected spying operations in both the public and private sectors. Banks or healthcare systems (the NHS in the UK) can be targeted for disruption and brought to a halt by AI systems producing overwhelming quantities of requests for information or action. Malware can lie dormant and evolve like a virus before being activated.

All this requires security cooperation between the public and private sectors, and early warning systems to be put in place and regularly updated. There needs to be a well-funded cybersecurity agency with a control centre able to identify a threat rapidly. There needs to be a cyber-emergency response team for every essential public and private entity, with good information-sharing between them, in order to understand the threat and respond in a coordinated way.

And for diplomats?

The last paragraph clearly applies to diplomacy and diplomats as much as to any other field or group.

Diplomats and diplomatic institutions can also be targeted by deep fakes and cyberattacks, so they need to be constantly aware of the threats and to combine traditional diplomatic skills with IT and AI skills.

AI allows a diplomat to analyse large quantities of information and produce predictions and forecasts. This is extremely valuable but comes with obvious risks, since it depends on the accuracy and comprehensiveness of the database, and this in turn depends on the reliability of the sources of the data.

To work successfully, the Ministry will require investment in AI tools, in cooperation with democratic partners, in order to ensure an effective framework for the maintenance and security of its digital infrastructure.

Ethics, cooperation and competition

Regulation might be agreed globally in some fields, perhaps in medicine. However, agreement between the West and Russia and China in the military field seems much less likely, at least at the moment.
