A few days ago, Clipboard published an article on Artificial Intelligence (AI) that discussed the algorithm and how important it is to understand how it works in an environment increasingly permeated by artificial intelligence.
It pointed out that AI systems are shaped far more by the data they are trained on (we will see shortly what this means) than by the algorithm.
Let us take this as a starting point to explore the issue, which is not an alternative view but the other side of the same coin.
Data and algorithm work together to produce a given result, and it is from their interaction that a response, a work, an action is generated.
What the new rules do
That the algorithm has a major impact on the near future is also demonstrated by Regulation (EU) 2022/2065 of 19 October 2022, known as the Digital Services Act (DSA). It came into force on 16 November 2022 and is intended to regulate and control the activity of large platforms and search engines with more than 45 million monthly active users.
The DSA has required platforms to indicate the number of their users by 17 February 2023. And according to the data provided, the VLOPs (Very Large Online Platforms) were Alibaba AliExpress, Amazon Store, Apple App Store, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, Zalando and the search engines Bing and Google Search.
VLOPs will have to comply with the regulation and meet a number of rather stringent obligations regarding privacy, advertising, content moderation and the control of fake news.
Monitoring will also extend to verifying the impact of algorithms on society in the medium and long term. The algorithm is not the program code; it is what comes first: the functional logic scheme, the phases, the method. In a nutshell, it is the software's way of reasoning.
Knowing the algorithm means being aware of the connections and deductions, of the chains of consequence, and thus being able to know why a given input yields a given output, which may be a response or an action commanded by the machine.
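A minimal sketch of this idea, using Euclid's algorithm as a hypothetical example (it does not appear in the article): because the reasoning scheme is known, every step from input to output can be traced and explained.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b  # each step is an explicit, inspectable deduction
    return a

# Knowing the algorithm, we can say exactly why input (48, 18) gives output 6.
print(gcd(48, 18))  # 6
```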
Artificial intelligence, however, is more complex. There are algorithms that call other algorithms, which in turn call others, in a network that complicates their knowability.
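Even a trivial, hypothetical pipeline shows how composition obscures the overall logic: each step below is knowable in isolation, but the end-to-end behaviour emerges only from the chain.

```python
def normalise(text: str) -> str:
    # one algorithm...
    return text.lower().strip()

def tokenise(text: str) -> list:
    # ...calling another...
    return text.split()

def score(tokens: list) -> int:
    # ...calling another still
    return len(tokens)

def pipeline(text: str) -> int:
    """Each stage is simple; understanding the whole requires tracing all of them."""
    return score(tokenise(normalise(text)))

print(pipeline("  The Algorithm Calls Other Algorithms  "))  # 5
```

Real AI systems chain far more, and far less transparent, components than this.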
Learning to learn
The industry is hunting for data in every way and form, and the (supposedly) free applications and services offered on the Net, which users flock to while handing over copious information about their habits, are an all too familiar example of this.
Data are what the machine uses to learn how to reason, just as a student learns new concepts and techniques. In artificial intelligence, neural networks increasingly resemble the human brain in the way they learn, and they are capable of learning from experience.
The systems are 'self-learning', in the sense that they process data, learn from it, and become capable of delivering increasingly accurate results. These are the 'machine learning' systems: learning machines. Some then become so good that, on the basis of what they have learnt, they can produce new results and new algorithms. These are the 'deep learning' systems.
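The learning-from-data idea can be sketched in a few lines (a hypothetical toy, not any real ML library): the program is never told the rule y = 2x; it estimates a parameter from examples, and the estimate improves as the error shrinks.

```python
# Training examples: inputs with desired outputs (the hidden rule is y = 2x)
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = 0.0                        # initial guess for the model parameter
for _ in range(200):           # repeated exposure to the training data
    for x, y in data:
        error = w * x - y      # how wrong the current model is
        w -= 0.01 * error * x  # adjust w slightly to reduce the error

print(round(w, 2))  # close to 2.0: the rule was learnt from data alone
```

This is the core loop behind far larger systems: more data and more parameters, but the same principle of adjusting a model to fit examples.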
It is therefore evident that, as in every human learning process, the data the machine processes is crucial in technological learning too, because artificial intelligence is not capable, in itself, of ethical evaluation: it absorbs everything it is fed without any critical discernment.
In this sense, training can go wrong through incompetence or carelessness. But it can also amount to indoctrination, if a system is deliberately instructed with biased data.
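The point about uncritical absorption can be illustrated with a deliberately crude, hypothetical 'model' that learns word–label associations by counting: it mirrors its training data exactly, including any skew planted in it.

```python
from collections import Counter

def train(examples):
    """Count how often each word appears with each label."""
    counts = {}
    for text, label in examples:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(model, word):
    """Return the label most often seen with the word."""
    return model[word].most_common(1)[0][0]

# A deliberately biased training set: 'nurse' is only ever labelled 'she'.
biased = [("the nurse said", "she"),
          ("a nurse arrived", "she"),
          ("the doctor said", "he")]

model = train(biased)
print(predict(model, "nurse"))  # 'she': the bias is reproduced verbatim
```

The model has no way of knowing the association is a bias rather than a fact about the world; it simply repeats what it was fed.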