Artificial intelligence is no longer science fiction, but a reality that each of us deals with in one way or another. The army, too, now needs to use self-learning algorithms on a wide scale.
According to the results of the study “Artificial Intelligence in the Life of Poles,” commissioned by Huawei and conducted by the Maison & Partners consumer and market research agency, most people associate artificial intelligence (AI) with something along the lines of the Terminator from American science-fiction movies. Most also think that this technology, taking the form of Arnold Schwarzenegger in his best years, is not necessarily well-meaning toward human beings. However, today’s AI has nothing to do with the movie android. What’s more, it is now so common that virtually all of us use it every day.
Examples abound. These can be algorithms estimating our creditworthiness in banks, or anti-spam filters deciding which messages should go to the spam folder. In social media, we are shown personalized ads related to what we have just been searching for on the net. An analogous content-suggestion system works in services such as VOD, where – based on what the user has watched and rated highly – other (similar) videos are recommended. There are also image-recognition mobile apps that identify physical objects through the camera or in already captured images. Another example of artificial intelligence is the so-called virtual assistant, which helps to set an appointment date or a visit to the dentist or hairdresser over the phone (voicebot) or via chat (chatbot). Such solutions are soon to be introduced, for example, in customer service on the National Health Fund (NFZ) helpline. These are only a few examples of the AI algorithms we deal with so regularly that we no longer notice when we encounter them.
“We use AI algorithms and we don’t even realize it. AI today is a benefit for users, ordinary people, not only a domain of scientists,” emphasizes Col Mariusz Chmielewski, PhD in Eng., Deputy Dean of the Military Cybernetics Faculty at the Military University of Technology (WAT) in Warsaw.
Examples of how useful artificial intelligence can be are sometimes quite spectacular. In 2019, engineers at the Massachusetts Institute of Technology (MIT) developed an algorithm which, from among thousands of chemical compounds, identified a substance with the properties of a strong antibiotic. It was named “halicin” after the HAL 9000 supercomputer in Stanley Kubrick’s movie 2001: A Space Odyssey. Halicin reportedly kills even bacteria that are resistant to all known antibiotics, while having low toxicity to human cells.
Artificial – Meaning Exactly What?
A dictionary entry on “intelligence” says that it is the ability to acquire and apply knowledge and skills, to understand a situation and to find a proper, purposeful solution or response to it. Intelligence is a human feature, but there are already methods which help machines act analogously. Algorithms cannot think on their own or act to their own benefit, but they are able to learn, to identify phenomena and to find proper ways to react to a given situation. How? Col Chmielewski explains: “Artificial intelligence is a collection of all methods which either simulate the thinking process of the brain, or imitate the structures and functions existing or occurring in the brain, e.g. neural networks or reasoning mechanisms. In such a context, AI can perform various tasks, such as classifying objects, or examining the similarity of things or people by extracting the features on the basis of which we are able to assess such similarity.”
However, artificial intelligence must first be prepared for action. A number of methods of so-called machine learning have been developed, which help algorithms gain new skills. “There exists a well-developed branch of science called knowledge engineering,” emphasizes Col Chmielewski. “People have learned to build systems which, on the basis of collected data and defined rules, are able to deduce new facts.”
Machines can learn in various ways. Humans can supervise the process by testing whether it is moving in the right direction. A program can also learn on its own (which is called unsupervised learning), and all a human must do is provide the data. “The AI mechanisms use the teacher-to-student approach: here you have the assumptions and here the conclusions based on these assumptions, and now you have to learn it. We provide input data and a result, the output data, and if we provide it the right way, properly prepared, there will be no errors. That is why a knowledge engineer should properly design the learning dataset, because otherwise the resulting model will not work well,” explains Col Chmielewski.
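The teacher-to-student scheme described above can be sketched in a few lines of Python. The example below is a minimal, hypothetical illustration – a 1-nearest-neighbour classifier fed invented spam-filter data – not a description of any particular system; the feature values and labels are made up for the example:

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbour classifier.
# The "teacher" supplies labelled examples (input data plus expected output);
# the model then answers for unseen inputs by analogy to that training set.

def euclidean(a, b):
    """Distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_set, sample):
    # training_set: list of (feature_vector, label) pairs prepared by the
    # knowledge engineer; a poorly designed set yields poor answers.
    nearest = min(training_set, key=lambda pair: euclidean(pair[0], sample))
    return nearest[1]

# Toy labelled data (invented): (feature vector, label)
training = [
    ((0.1, 0.2), "spam"),
    ((0.9, 0.8), "not spam"),
    ((0.2, 0.1), "spam"),
    ((0.8, 0.9), "not spam"),
]

print(predict(training, (0.15, 0.18)))  # → spam
print(predict(training, (0.85, 0.80)))  # → not spam
```

Note how the quality of the answers depends entirely on the labelled examples supplied by the human – exactly the point about well-designed learning datasets made above.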
Putting it very simply, the entire process looks more or less like this: if we are dealing with, for example, a face-recognition system, then pre-prepared algorithms should be provided with the largest possible learning dataset. It includes face images with personal data ascribed to them. While analyzing an enormous number of photo portraits, the program detects certain features by identifying characteristic elements, for example pupillary distance, eye color, the shape of the nose, and the color of facial hair and hair on the head. The more features we put in, the more accurately we can classify objects. The program identifies characteristic elements, learns how to distinguish between them, analyzes and groups them, and as a result presents the relations between them. However, in order for everything to go smoothly, well-designed learning datasets and high-quality data are required.
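To give an intuition for the matching step of such a face-recognition pipeline, here is a deliberately simplified Python sketch. Each portrait is reduced to a small feature vector, and an unknown face is matched to the closest labelled portrait. All names and feature values are invented for illustration; real systems extract hundreds of features directly from the image:

```python
# Hypothetical gallery of labelled portraits, each reduced to a feature
# vector: (pupillary distance in mm, eye-colour code, nose-shape code).
# Values are invented purely for illustration.
KNOWN_FACES = {
    "Anna":   (62.0, 1, 3),
    "Borys":  (58.5, 2, 1),
    "Celina": (65.0, 1, 2),
}

def distance(a, b):
    """How dissimilar two feature vectors are."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(features):
    # Match the unknown face to the labelled portrait it most resembles.
    return min(KNOWN_FACES, key=lambda name: distance(KNOWN_FACES[name], features))

print(identify((61.8, 1, 3)))  # closest to Anna's stored features
```

Adding more features to the vectors makes the distances more discriminative – the "more features, more accurate classification" point from the paragraph above.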
“Artificial intelligence generally allows for imitating the thinking operations performed by humans every day and related to decision-making. The difference is that a human has the ability to think abstractly and to make intuitive, emotional decisions, so he cannot always rationally explain why he has made one decision and not another. AI, on the other hand, points to a precise answer, and the most recent methods allow for explaining the reasoning process and the underlying data analysis,” says Col Chmielewski.
Who’s in Charge?
In the 21st century, fully automated production lines, with machines doing all the work and people only supervising them, are no surprise. In transport we use, for instance, the autopilot, a device capable of quite complicated operations related to navigating an aircraft or a ship. These are automated operations, but they still fit into the frames previously set by a human: the systems monitor route parameters and watch that everything runs according to procedure. When something goes differently than usual, a human takes over control, as he is able, based on his training and experience, to respond adequately to the situation.
At present, we are witnessing another stage of this revolution: algorithms, i.e. computer programs, which are able to learn and, on the basis of delivered data, improve their skills, so that in the future they can react properly not only in standard situations, but also when faced with unexpected challenges. “Based on various symptoms, features and attributes, and with the use of input and output data, artificial intelligence draws conclusions and can recommend certain decisions, or simply act independently,” emphasizes Col Chmielewski. “We can talk about AI when a program is learning and, by acquiring new data, correcting the accuracy of its own responses.”
The achievements in this area are impressive: in 2016, the ALPHA program, developed at the University of Cincinnati, was able to win a simulated fight against Gene Lee, an ace of American aviation and a retired colonel with massive experience gained during missions and training. The program won the dogfights every time, even though it was purposefully limited in such performance elements as speed, agility and engine power in comparison to the simulator piloted by the human.
What artificial intelligence did in the simulator has so far not been possible to implement in reality. Although the US Defense Advanced Research Projects Agency (DARPA) runs the Air Combat Evolution (ACE) program, it still does not assume that AI will entirely eliminate pilots from combat aviation. Quite the reverse: the systems will support pilots. “As we see it, in the future artificial intelligence will handle the split-second decisions of a short-distance dogfight, which will make pilots safer and more efficient in a situation where they have to simultaneously supervise a significant number of UAVs,” said Dan Javorsek, ACE supervisor. Speed of action is what almost everybody considers AI’s key feature. Col Tadeusz Zieliński, Deputy Rector for Science at the War Studies University (ASzWoj), also emphasizes that fact: “Artificial intelligence solves many problems and makes many things easier, because the system is able to analyze enormous amounts of data in milliseconds and make a decision faster than a person – which can be crucial in saving human life.”
Automated and Autonomous
It is anticipated that AI will be used in so-called autonomous and semi-autonomous combat systems, such as the loyal wingman program. It is an armed drone (the research is conducted on the XQ-58A Valkyrie machine) whose task is to perform missions jointly with manned combat aircraft, acting as a supporting wingman. AI-controlled drones are able to learn the flying style of the specific pilot with whom they will work, in order to optimize cooperation during a combat mission and relieve the pilot in the most difficult tasks. Drones of this type will also join the Polish aviation arsenal as part of the Harpy Claw (Harpi Szpon) program.
“Autonomy in combat means that a system, having no pre-entered algorithms for various scenarios, will be able to make decisions based on its own observations in a given time and place. We are talking here about artificial intelligence which will be able to make its own decisions, independently from a human,” emphasizes Col Zieliński. Scenarios considered for such systems also include situations where AI, having identified a terrorist, will be allowed to decide on eliminating him or on attacking an enemy base.
Systems which make independent decisions about who should be attacked, and how, raise serious ethical and legal doubts. This is, however, a thing of the future. “Military research on AI is not fully open to the public. However, based on the accessible literature on the subject and my own knowledge, I can say that currently no fully autonomous combat systems exist, or at least none that have reached operational readiness,” says Col Zieliński. He immediately adds: “Are we, humans, going to allow machines to make decisions about killing? I think that in the case of using AI in combat – and we are talking about deadly uses here – a human should always be the last link in the chain. I think this must be unconditional.”
This opinion is shared by Col (Pilot) Władysław Leśnikowski, PhD in Eng., an assistant professor at the Civil Aviation Management Department of the War Studies University (ASzWoj), who in his book Bezzałogowe platformy w cyberprzestrzeni (Unmanned Platforms in Cyberspace) wrote: “Some experts claim that unmanned autonomous platforms will not be able to distinguish justified military targets from neutral civilians, thus creating a serious threat for those not engaged in combat activity. If artificial intelligence in unmanned autonomous weapon systems is not able to distinguish military targets from non-military ones, then this is illegal in the light of the law of armed conflict.”
And Yet, Decisions…
Machines should not make their own decisions, but they can support decision-making processes, e.g. during the analysis of a specific combat situation. We can imagine a situation where, having at its disposal data collected in real time by soldiers of a military unit using observation drones, and combining them with data from command systems, AI prepares different variants and provides a commander with optimal scenarios. This way, AI can indicate possible ways of conducting a task and suggest the least risky patrol route in difficult terrain. “Artificial intelligence allows for using saved models and matching them to a given situation in order to solve, for instance, problems with command in crisis situations,” says Col Chmielewski. “For example: when we don’t know how our enemy will behave, we can use a database of models of action based on hitherto experience in combat and on doctrinal analysis. Artificial intelligence can recommend the best variant based on historical data, referring to similar situations that happened in the past. These are useful tools, which will surely help in solving problems that many commanders must face, and will certainly increase effectiveness,” he adds. It should therefore be expected that AI will be broadly used in command support systems at various levels – not only the tactical, but also the operational or strategic level.
Artificial intelligence can be particularly useful where the analysis of enormous databases allows for recognizing certain regularities – or, quite the reverse, irregularities – and recommending to a human possible solutions to existing problems. The analysis of network traffic or the detection of malware behavior are good examples of such applications. Artificial intelligence can be used very broadly, also in the army. “The uses I can think of are automated transport systems, autonomous helicopters or unmanned autonomous ships for medical evacuation. There is also the transport of medications or organs for transplantation. AI can be used on the battlefield and in the civil environment,” summarizes Col Zieliński. AI can be particularly useful in medicine, specifically as support in diagnostics or monitoring. There already exist AI systems with enormous databases of medical research papers, which can precisely diagnose rare and non-specific diseases, as well as, for instance, oncological illnesses.
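The network-traffic example above can be illustrated with a toy anomaly detector in Python: it learns a statistical profile of “normal” traffic, then flags samples that deviate strongly from it. The traffic figures and the three-sigma threshold are invented for illustration; real intrusion-detection systems use far richer models:

```python
import statistics

def fit_baseline(samples):
    """Learn a simple profile of 'normal' traffic: mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def anomalies(samples, mean, stdev, k=3.0):
    """Return the samples lying more than k standard deviations from the mean."""
    return [s for s in samples if abs(s - mean) > k * stdev]

# Requests per second observed on a link during normal operation (invented).
baseline_traffic = [120, 115, 130, 125, 118, 122, 128, 119]
mean, stdev = fit_baseline(baseline_traffic)

# Live measurements: the spike to 940 could indicate a flood attack.
live_traffic = [121, 117, 940, 124]
print(anomalies(live_traffic, mean, stdev))  # → [940]
```

This is the “irregularity spotting” pattern in miniature: the system learns what regular looks like and recommends the exceptions to a human for a decision.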
We Have It, Too
In Poland, the use of artificial intelligence is also talked about more and more often. Col Zieliński emphasizes: “There are two strategic documents. In 2018, the »Principles for AI Strategy in Poland. Plan of Activities of the Ministry of Digitalization« was published, and a year later the »Policy for Artificial Intelligence Development in Poland in the Years 2019–2027.« Both documents clearly indicate the main idea: Poland must be a creator, not a consumer, of artificial intelligence. I think our intellectual and engineering centers have the right potential. There is a significant number of R&D institutions and companies which can develop AI. However, we are still in the phase of building this capability, and it takes time. At the beginning of March, a team for AI issues was formed in the National Center for Research and Development (NCBiR), which means that only now can we expect calls for proposals related to AI development for civil or military use, including those in the area of security and defense.”
Photo credit: Grafika PZ