Artificial intelligence-powered intelligent reflecting surface systems countering adversarial attacks in machine learning

ABSTRACT


INTRODUCTION
Next generation networks, called NextG or 5G and 6G, are gaining attention in both industry and academia, and consumers are expecting high demand and new ways of communication. Based on a study by the International Telecommunication Union, mobile network traffic on 5th or 6th generation future networks will increase year over year into thousands of petabytes [1], [2]. The principle of NextG networks is to transmit data immediately, with the least amount of delay, between hardware and software devices; they are commonly used in fields such as e-health medical services, cloning, artificial authenticity, various autonomous vehicles and online e-learning [3]. Next generation technologies are also used to enhance computing and communication systems. Artificial intelligence (AI) is one of the strong platforms that is very important in developing inventory models in the next generation network [4], [5].
An intelligent reflecting surface (IRS) upgraded with multiple input and multiple output (MIMO) uses millimeter waves and is a powerful and efficient method in terms of channel capacity and data transmission ratio. It is also capable of reconfiguring wireless systems to obtain more concentration. IRS utilizes a huge number of low-cost passive send-back elements whose signals add constructively at the destination network, improving the output of wireless communication networks. Despite the various tools available, such as cyber security and AI, metamorphic and polymorphic security attacks reduce the effectiveness of AI model training. These adversarial attacks manipulate the AI model by intentionally mixing unwanted signals into the original dataset and misguiding it [6].
In this article, an AI-IRS system is proposed for next generation networks to reduce the vulnerability to a minimum level in the academic and business environment [7], [8]. This involves: i) calculating the susceptibilities of the AI methods of the IRS system to adversarial attacks using the fast gradient sign method (FGSM) and the basic iterative method (BIM); ii) proposing a defensive distillation mitigation algorithm to improve the robustness and efficiency of the AI model on the IRS system; and iii) training the AI-IRS systems to produce and maintain robust output data under undefended and defended methods against FGSM and BIM adversarial attacks.

METHOD
Intelligent reflecting surface wireless communication
Wireless communication quality can be enhanced using an IRS wireless communication system, which significantly improves the efficiency of communication between a sender and a receiver. The destination receives both the line of sight (LOS) waves from the LOS connection and constructive send-back signals from the IRS during idle time [9]. IRS can improve communication systems by dynamically changing wireless channels and adjusting the signal reflection surfaces via a large number of inexpensive passive reflecting devices. Though an IRS-supported hybrid wireless network with passive and active components promises to achieve long-term and cost-effective capacity growth, it needs to overcome certain obstacles such as channel estimation, deployment, and reflection optimization [10].
This suggests that a machine learning (ML) model has to be trained to detect the domain signifiers in order to predict the achievable rate of each IRS interaction communication route. This can be achieved with current developments in deep learning, in which the transmitter reflects the sent data to the receiver and the IRS interaction route with the highest expected realistic rate is used. The method is referred to as the AI method on the IRS system; in this work its weakness is examined and evaluated using the defensive distillation mitigation strategy [11], [12].

Adversarial machine learning
Adversarial ML is used in a variety of applications and is primarily used to implement malicious attacks or to cause ML models to malfunction [13]. The principle is to train models to automatically understand the original designs of the working procedure and the relationships in data using trained algorithms [1]. Post-training analysis is mostly used to calculate and analyze the outlines in the given information [14]. Figure 1 shows the steps involved in a wrong prediction due to an attack on a machine learning technique. The precision range of the trained model is important for obtaining a better outcome, which is addressed as generalization. The various types of adversarial machine-learning attacks include data evasion, poisoning and model attacks [15]. Adversarial ML methods are used to finalize and locate adversaries and produce planned betrayals of the ML model. The sample model input should confuse the model by inducing an invalid classification of the given data, which can be used to exploit certain blind spots in image classifiers [16], [17]. This article's goal is to examine the most recent adversarial ML techniques to create and identify adversarial samples. Both targeted and non-targeted evasion attempts aim to persuade models to incorrectly identify malicious examples as valid data points. Targeted attacks attempt to persuade ML models to classify adversaries into a specific target class. Non-targeted attacks are designed to force ML models to classify the adversarial example as a different class than the real one [18]. The goal of data poisoning is to generate false data points that will be used to train ML models into producing the desired results. Some examples of adversarial attacks are FGSM and BIM.

Fast gradient sign method
FGSM is a one-step attack in which the perturbation is added in a single step rather than over a loop. The fast gradient sign method involves the following three steps: first, compute the loss function via forward propagation; next, calculate the gradient with respect to the pixels of the image; and finally, move the image pixels a little in the direction of the estimated gradients to increase the loss computed in the first step [19].
A negative log-likelihood loss is applied to determine how closely the model's prediction matches the actual class. The computation of gradients with respect to the image pixels is unusual. Gradients are normally used in neural network training to determine the direction in which weights need to be changed to reduce the loss value. Here, as an alternative, the input image pixels are moved in the gradient's direction to increase the loss value. Back-propagating the gradients from the output to the weights is the most commonly used method when training a neural network to determine the direction in which a specific weight deep in the network is altered. In such situations a similar idea [20] is applied: the gradients are propagated back from the output layer to the input image.
In neural network training, the weights are moved against the gradient to reduce the loss value:

θ = θ − η · ∇_θ J(θ, x, y)

In FGSM, the pixel values of the image are instead moved in the gradient's direction to increase the loss:

x_adv = x + ε · sign(∇_x J(θ, x, y))

where x_adv is the adversarial image, x the input image, ε the perturbation magnitude, and ∇_x J(θ, x, y) the first derivative of the loss function with respect to the input x. In the case of deep neural networks, this gradient can be calculated using the back-propagation technique.
The following equation is used for targeted FGSM attacks:

x_adv = x − ε · sign(∇_x J(θ, x, y_target))

that is, the perturbation term is the negative of the one in the untargeted case. In targeted attacks, the loss function between the targeted class and the predicted class is minimized, whereas an untargeted attack maximizes the loss function between the predicted class and the true class [21].
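The two FGSM variants above can be sketched in a few lines of NumPy. This is an illustrative example, not the paper's implementation: it assumes a simple logistic-regression model with hypothetical weights `w` and bias `b`, for which the input gradient of the cross-entropy loss has the closed form (p − y)·w.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    # Gradient of the cross-entropy loss of a logistic model
    # with respect to the input x is (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # Untargeted FGSM: step WITH the gradient sign to increase the loss.
    return x + eps * np.sign(grad_x)

def fgsm_targeted(x, y_target, w, b, eps):
    # Targeted FGSM: step AGAINST the gradient sign to decrease the
    # loss toward the target label.
    p = sigmoid(w @ x + b)
    grad_x = (p - y_target) * w
    return x - eps * np.sign(grad_x)
```

Because every feature moves by exactly ε, the perturbation's infinity norm is ε by construction, which is what makes FGSM a one-step attack.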

Basic iterative method
ML algorithms iteratively study the data, which permits the machine to find hidden patterns within the data [22]. The objective of a basic iterative algorithm is to find the best solution from the data set. These algorithms learn from previous experience so that consistent and repeatable decisions are made to obtain the best solution [23].
The method can be repeated several times with small step sizes. This technique involves clipping the pixel values of the intermediate result in each phase to ensure that they remain in the vicinity of the original image, that is, within a certain range of the original image's pixel values.
The following mathematical calculation is used for generating the perturbed pictures using this basic iterative method:

x_adv_0 = x,   x_adv_{i+1} = Clip_{x,ε}{ x_adv_i + α · sign(∇_x J(x_adv_i, y)) }

where x_adv_i and x are the adversarial image at the i-th step and the input image respectively, J represents the loss function, y is the output for input x, ε is the tunable perturbation bound and α is the step size. An overview of the iterative algorithm is provided in Figure 2.
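The iteration above can be sketched as follows. This is a hedged, model-agnostic sketch rather than the paper's code: `grad_fn` stands in for whatever routine returns ∇_x J for the model under attack, and the clip step keeps every iterate inside the ε-ball around the original input.

```python
import numpy as np

def bim(x, y, grad_fn, eps, alpha, steps):
    """Basic iterative method: repeated small gradient-sign steps of
    size alpha; after each step the iterate is clipped back into the
    eps-neighbourhood of the original image x."""
    x = np.asarray(x, dtype=float)
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv, y))
        # Keep every pixel within [x - eps, x + eps].
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

With steps = 1 and alpha = eps this reduces to FGSM; with many small steps it typically finds stronger adversarial examples inside the same ε-ball.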

WORK CONCEPT
In the neural network architecture and defensive distillation technique (DDT), the input data received from the user devices is used in the IRS prediction method. Defensive distillation training is covered using a defended model that combines a deep neural network with a large network size and a shallow neural network with a small network size [24], [25]. The overall system design for the proposed AI-powered intelligent reflecting surface system is shown in Figure 3. The figure shows that during prediction model training, a shallow neural network model is protected against adversarial ML attacks in mobile base stations. Adversarial attacks are applied in both defended and undefended modes to evaluate the methods under attack.

Neural network architecture
The neural network technique, also called deep learning, imitates the principles of the human brain while processing data using a computer [26]. As shown in Figure 4, it uses interconnected nodes or neurons in a layered structure that resembles the human brain. The neural network input is a signal from the transmitter and receiver of the uplink pilot.

Defensive distillation technique
The defensive distillation technique is one of the most popular adversarial training methods; it adds flexibility to the classification process of an algorithm, making it less prone to attacks. DDT employs defensive knowledge distillation to train the model to be more powerful. Knowledge distillation was previously introduced by Catak et al. [28]. In this technique, the knowledge of the master (densely connected neural network) is transferred to a slave (sparsely connected neural network). In knowledge distillation, the slave should perform similarly to the master by imitating the master's output; soft labels produced by the master network are used to train the slave network.
The workflow of the DDT consists of three steps.
Step 1 trains the master model with the loss function for the classification of inputs. Step 2 trains the previously trained master model again with the defensive distillation method, using a cross-entropy loss function to generate the corresponding soft labels as outputs. The last step trains the slave model using the soft labels from the previous step to produce a better, more robust and more accurate method. Algorithm 1 provides the defensive distillation technique used in this study to counter adversarial attacks in machine learning. The defensive distillation parameters of this study are provided in Table 1.
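The soft-label step at the heart of this workflow can be sketched as below. This is an illustrative sketch, not the paper's Algorithm 1: the function names (`softmax`, `soft_labels`, `cross_entropy`) and the use of a distillation temperature `T` are assumptions based on the standard defensive distillation formulation.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 produces softer probabilities.
    z = (np.asarray(logits, dtype=float) - np.max(logits)) / T
    e = np.exp(z)
    return e / e.sum()

def soft_labels(master_logits, T):
    # Step 2: the trained master model's logits are converted into
    # soft labels using a high distillation temperature T.
    return softmax(master_logits, T)

def cross_entropy(soft_targets, slave_probs):
    # Step 3: the slave model is trained to match the soft labels by
    # minimizing this cross-entropy loss.
    return -np.sum(soft_targets * np.log(np.asarray(slave_probs) + 1e-12))
```

Raising T flattens the output distribution, so the slave learns relative class similarities from the master instead of hard one-hot targets, which is what blunts gradient-based attacks such as FGSM and BIM.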

EXPERIMENTAL AND RESULTS
AI-powered IRS methods are evaluated using the mean square error (MSE). MSE scores are used to evaluate the model vulnerabilities under protected and unprotected conditions. The MSE is calculated as:

MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²   (7)

where n denotes the total number of samples, y_i the actual data value and ŷ_i the predicted data value.
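The metric in (7) amounts to a one-line reduction; a minimal NumPy sketch (illustrative only, the function name `mse` is ours):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean square error: the average squared difference between the
    # actual values y_i and the predicted values y_hat_i.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)
```

A lower MSE under attack indicates a more robust model, which is how the defended and undefended configurations are compared below.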
The output is represented in the form of bar plots (Figures 5 and 6) and histograms (Figures 7 and 8), which show the MSE values for each adversarial ML attack on the protected and unprotected systems. Table 2 shows the prediction performance outputs for the protected and unprotected AI-powered IRS method countering the attacks. Publicly available ray-tracing MIMO datasets were adopted to generate the training data and compare with the AI-powered IRS method. Based on the ray-tracing data obtained from the ray-tracing simulation scenario, the MIMO dataset parameters were used to build the MIMO channels.
Adversarial attacks on AI-powered methods have become more popular, with several attack types available. The BIM and FGSM types are used in this study to generate adversarial examples. The performance of each model was estimated through the MSE metric.
The trained AI-powered IRS method was simulated using Python and the TensorFlow framework, executed on a Google Colab Tesla GPU with 16 GB memory. The adversarial input data were generated using the CleverHans library. Figure 5 shows the MSE values for the selected attack methods under attack powers from 0.01 to 0.10. The MSE values are similar for both the BIM and FGSM algorithms and are around 0.08 for all attack powers. Furthermore, MSE values for BIM attacks rise with increasing attack power, ranging between 0.008 and 0.009. The output shows that AI-powered models are vulnerable to adversarial attacks. Mitigation techniques are broadly used to improve the robustness of AI-powered models against adversarial attacks [29]. Based on this observation, the DDT was applied to this method to reduce the vulnerability to adversarial attacks. The performance of the AI-powered model was estimated in terms of MSE after applying the mitigation method. The MSE values against adversarial attacks range from 0.01 to 0.04 in Figure 6. The curve depicts that although the AI-powered model is still prone to adversarial attacks, its robustness against them is better. It was observed that the model can resist any attack under low attack power, that is, less than 0.30. An increasing mean square value implies that a high power attack is expected. The effect of the mitigation technique on performance is not the same for all attacks. The MSE values range between 0.001 and 0.003 under the FGSM and BIM attacks respectively, whereas under high attack power it goes up to 0.003 for BIM. On the other hand, the attack power under the FGSM attack is low when the mitigation technique is applied to the model. The output indicates that the defensive distillation model significantly contributes to the model's robustness against adversarial attacks.

CONCLUSION
AI is one of the most important technologies for improving the performance of next generation networks. This article examines the vulnerability of AI-powered IRS models to FGSM and BIM adversarial attacks. A mitigation method such as defensive distillation improves the robustness of next generation networks. The output indicates that AI-powered next generation networks are vulnerable to adversarial attacks. The overall result shows that BIM is the most effective adversarial attack (30%) on defended compared with undefended methods. The proposed defensive distillation mitigation method provides better results for defended FGSM attacks (22%) than undefended FGSM attacks. Future works can focus on vulnerabilities to various adversarial attacks such as Carlini and Wagner, the momentum iterative method (MIM) and projected gradient descent (PGD), as well as the defensive distillation mitigation method.

Figure 1. Adversarial attacks on data

Figure 2. Overview of the iterative algorithm


The neural network's output is a prediction score based on the input signals from the transmitter and receiver. The neural network consists of multiple layers [27]. The output follows a multilayer perceptron design, in which inputs are processed by multiple layers of neurons.

Table 1. Defensive distillation parameters