CATA2023: Papers with Abstracts

Papers
Abstract. This paper proposes a new AI (Artificial Intelligence) enabled SaaS (Software as a Service) framework to facilitate the work of Fashion Designers in evolving new and innovative Fashion Designs. Fashion Designs are, more often than not, complex; using concepts of Service Choreography together with a Blackboard Architecture, we propose a new model for facilitating the collaborative and individual innovative work of Fashion Designers.
Abstract. The objective of this work is to propose an improvement to the quality of business processes in a biodiesel plant. As a first approximation, the company's existing conceptual models of business processes were analyzed and studied in order to obtain a panoramic view of the current situation of the organization. For this, a framework for measuring the quality of business process models was applied, which provides a set of metrics and indicators to carry out this measurement. The objective of the framework is to provide organizations with a means of maintaining objective and accurate information about the maintainability, understandability, coupling, and cohesion of the models, facilitating the evolution of the Business Processes (BPs) of companies engaged in continuous improvement. It supports the management of BPs by enabling early evaluation of certain quality properties of their models. With this, organizations benefit in two ways: (i) the understanding and dissemination of the BPs and their evolution are guaranteed without affecting their execution, and (ii) the effort needed to change the models is reduced, which in turn reduces maintenance and improvement efforts. The framework comprises two evaluation methods that approach the same problem in two different ways: one is numerical, while the other is closer to linguistic expressions resembling everyday language. Both methods provide important results to different areas of the business, giving the framework added value when analyzing BP conceptual models, since it allows choosing how to evaluate the models according to the characteristics of the business models one wishes to analyze.
Abstract. With the rapid increase in the use of mobile phones and other technologies, there has been a proportional growth in malware that tries to collect sensitive user data. Android is the most popular operating system for smartphones and is therefore a prime target for malware threats. Scareware is a type of malware that uses social engineering to trick users into providing valuable information or downloading malicious software. This research aims to determine whether machine learning is a viable option for preventing the consequences of scareware by accurately detecting it. In this investigation, four supervised machine learning (ML) algorithms were applied to the CICAndMal2017 dataset, which contains 85 attributes for each of 11 Android scareware families as well as benign samples. We achieved an accuracy of 96%, which leads us to conclude that machine learning can and should be used to detect scareware. The machine learning models were then tested to calculate the classification accuracy for each scareware family.
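The detection task described above is, at its core, supervised classification of feature vectors into benign and scareware classes. A minimal sketch of that idea, using a simple nearest-centroid classifier on made-up feature values (not CICAndMal2017 data, and not the four algorithms the paper evaluates):

```python
# Minimal sketch: supervised classification of flow-feature vectors into
# benign vs. scareware classes via a nearest-centroid classifier.
# Feature values and labels are synthetic stand-ins, not CICAndMal2017 data.

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(samples):
    # samples: list of (feature_vector, label)
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    # assign x to the class whose centroid is nearest
    return min(model, key=lambda y: dist2(model[y], x))

train_set = [([0.1, 0.2], "benign"), ([0.0, 0.3], "benign"),
             ([0.9, 0.8], "scareware"), ([1.0, 0.7], "scareware")]
model = train(train_set)
print(predict(model, [0.95, 0.75]))  # → scareware
```

A real pipeline would replace the toy classifier with the evaluated ML algorithms and report per-family accuracy, as in the paper.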
Abstract. Explaining the presence or absence of transformations in nature, such as chemical or elementary particle reactions, is fundamental to our thinking about nature. This paper describes a generic approach to the search for such conserved quantities; in the work that follows, we formulate this approach to conservation-based explanations by drawing on techniques from Linear Algebra.
Abstract. In recent years, autonomous driving vehicles have been attracting growing commercial and scientific attention. How to detect and recognize objects in a complex real-world road environment represents one of the most important problems facing autonomous vehicles and their ability to make decisions on the road and in real time. While color imaging remains a rich source of information, LiDAR scanners can collect high-quality data under different lighting conditions and can provide high-range and high-precision spatial information. Expanding object detection by simultaneously processing data collected by a color camera and a LiDAR scanner brings new capabilities to the field of autonomous driving. In this paper, a 3D object detector is proposed with focal loss and Euler angle regression to optimize the detector's performance. It uses a bird's-eye view map generated from a LiDAR point cloud and RGB images as input. Results show that the proposed 3D object detector reaches a speed of over 46 frames per second and an average precision of over 90%. In addition, a more compact detector is also proposed that processes the same input data three times faster with only slightly lower accuracy.
Abstract. One method to reduce vehicle congestion in a road traffic network is to appropriately control traffic signals. One control scheme for traffic signals is a distributed control scheme in which individual traffic signals cooperate locally with other geographically close traffic signals. Deep reinforcement learning has been actively studied as a way to appropriately control traffic signals. In distributed control, it is important to select appropriate cooperative partners. In this study, we propose a method for selecting appropriate cooperative partners when applying deep reinforcement learning to distributed traffic signal control.
Abstract. Computing architecture continues the pendulum swing between centralized and distributed models – driven by technological innovation in CPU/GPU architecture, memory/storage, I/O, networking, performance, and emerging use cases. For a number of years, the most recent architecture has been the cloud-based centralized computing model, which is now shifting to a distributed edge computing model. New businesses, technologies, usage models, and new applications are driving this change. The rapid growth of IoT across all segments of society is driven by 5G, Edge, low-cost sensors, embedded SoC controllers, and new enterprise applications. The impact is more data generated by these smart sensors and increased demand for storage, data analytics, and network capacity to move this data to adjacent nodes or cloud models. These and other parameters are driving the emergence of the Edge computing model – and there are many Edge types. However, in terms of optimization and efficiency, these approaches may not be the best solution.
Abstract. This paper focuses on the design and architecture of an application that will programmatically pre-process data. This application aims to extract and provide clean airport terminal passenger throughput data within the United States. A different application will then use this data to forecast passenger throughput models via a user-friendly webpage. The purpose of forecasting passenger throughput across U.S. airport terminals is to improve Transportation Security Administration (TSA) checkpoint operations, for example by increasing TSA personnel at security checkpoints when the forecast expects a high volume of passengers during the holiday season, and by decreasing the personnel workforce at airport terminal checkpoints where high passenger throughput is not forecast. TSA seeks to improve its personnel scheduling using this forecasting model. In addition, the forecasting model will improve passenger satisfaction by avoiding excessive wait times at security checkpoints without jeopardizing passenger safety, maintaining adequate security protocols.
Abstract. In recent years, there has been massive growth in the usage of IoT devices. Cloud computing architecture is unable to meet the requirements of bandwidth, real-time response, and latency. To overcome these limitations, fog computing architecture was introduced, which responds to requests from IoT devices and forwards requests to the cloud only when necessary. Nonetheless, there are still some requests that need to go to the cloud and are affected by its shortcomings. In this work, we propose to add a peer-to-peer (P2P) structure to the fog layer. We have considered our recently reported 2-layer non-DHT-based architecture for P2P networks, in which the networks at each level of the hierarchy are all structured and each such network has a diameter of 1 overlay hop. Such low diameters have huge significance in our proposed P2P fog model and improve fog computing by enabling very efficient data lookup algorithms. In this model, fog nodes can work together to complete client requests. Consequently, fog nodes are able to fulfill client requests in the fog layer, which ultimately decreases the overhead on the cloud. Additionally, to improve the security of communication in the architecture, we have utilized ciphertext policy attribute-based encryption (CP-ABE) and presented a new secure algorithm.
Abstract. Derangement is a well-known problem in the field of probability theory. An instance of the derangement problem contains a finite collection C of n paired objects, C = {(x₁, y₁), ..., (xₙ, yₙ)}. The derangement problem asks how many ways there are to generate a new collection C′ ≠ C such that for each (xᵢ, yⱼ) ∈ C′, i ≠ j. We propose an efficient dynamic programming algorithm that divides an instance of the derangement problem into several subproblems. During the recursive process of unrolling a subproblem, there exists a repeated procedure that allows us to make use of a subsolution that has already been computed. We present the methodology for formulating the concept of this subproblem, as well as the design and efficiency analysis of the proposed algorithm.
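The overlapping-subproblem structure mentioned above can be illustrated with the classical derangement recurrence D(n) = (n − 1)(D(n − 1) + D(n − 2)), with D(0) = 1 and D(1) = 0. The sketch below shows the standard bottom-up dynamic program for this recurrence; the paper's own subproblem formulation may differ.

```python
# Classical DP recurrence for counting derangements:
# D(n) = (n - 1) * (D(n - 1) + D(n - 2)), D(0) = 1, D(1) = 0.
# Each value is computed once and reused, which is the "repeated
# subproblem" idea the abstract describes.
def derangements(n):
    if n == 0:
        return 1
    d = [0] * (n + 1)
    d[0] = 1      # the empty collection counts as one (vacuous) derangement
    # d[1] stays 0: a single pair cannot be deranged
    for k in range(2, n + 1):
        d[k] = (k - 1) * (d[k - 1] + d[k - 2])
    return d[n]

print([derangements(n) for n in range(7)])  # → [1, 0, 1, 2, 9, 44, 265]
```

The table-based version runs in O(n) time and space, in contrast to the exponential blow-up of naive recursion without memoization.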
Abstract. Load balancing is one of the main challenges in distributed computing systems such as cloud computing systems. It helps improve throughput while reducing the response time and cost for data- and computation-intensive applications. In this paper, we present an adaptive load balancing scheme for heterogeneous distributed computing systems whose objective is to provide a fair allocation of jobs to computing resources that reduces the cost of executing the jobs in the system. Using simulations, we compare the performance of the presented scheme with that of existing load balancing schemes.
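One simple way to picture cost-aware allocation in a heterogeneous system is a greedy rule that sends each job to the node with the lowest estimated completion cost. The sketch below uses made-up node speeds and job sizes and is not the paper's scheme, only an illustration of the general idea:

```python
# Illustrative greedy rule for a heterogeneous system: each job goes to the
# node with the lowest estimated completion cost, where cost grows with the
# node's queued work and shrinks with its speed. Speeds and job sizes are
# made up, not taken from the paper.

def pick_node(nodes, job_size):
    # nodes: {name: {"speed": work units/sec, "load": queued work units}}
    def cost(name):
        n = nodes[name]
        return (n["load"] + job_size) / n["speed"]
    return min(nodes, key=cost)

def assign(nodes, jobs):
    placement = []
    for size in jobs:
        chosen = pick_node(nodes, size)
        nodes[chosen]["load"] += size   # update the chosen node's queue
        placement.append(chosen)
    return placement

nodes = {"fast": {"speed": 4.0, "load": 0.0},
         "slow": {"speed": 1.0, "load": 0.0}}
placement = assign(nodes, [8, 8, 8, 8, 8])
print(placement)
```

As the fast node's queue grows, its completion cost eventually exceeds the slow node's, and later jobs spill over; an adaptive scheme like the paper's would additionally re-estimate node state over time.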
Abstract. In this paper, we have considered a recently reported 2-layer non-DHT-based structured P2P network. It is an interest-based system and consists of different clusters such that peers in a given cluster possess instances of a particular resource type. It offers efficient data look-up protocols with low latency. However, the architecture has one very important limitation: it is assumed that no peer in any cluster can have more than one resource type, which can be a very hard restriction in practice. This is true of all interest-based works existing in the literature. Therefore, in the present work, we address this issue by generalizing the architecture to overcome the restriction, and so far we have obtained some significant initial results. Work is ongoing to complete the generalization process. We have identified some of our previously reported data look-up protocols that will need to be modified to accommodate the new findings toward the generalization, and in doing so we aim to keep the data look-up latencies of these modified protocols unchanged. In addition, our objective is to consider security of communication in the generalized architecture as well. To achieve this, we aim to use a mainly public-key-based approach for the different look-up protocols reported earlier, because results obtained so far in this direction indicate that the required number of public-private key pairs will be much smaller than the number of symmetric keys a symmetric-key-based approach would require.
Abstract. On social media, false information can proliferate quickly and cause serious harm. To minimize the harm caused by false information, it is essential to understand its sensitive nature and content, and to achieve this it is necessary to first identify the characteristics of the information. To identify false information on the internet, we propose a transformer-based ensemble model in this paper. First, various text classification tasks were carried out to understand the content of false and true news on Covid-19, and their results fed the proposed hybrid ensemble learning model. The results of our analysis were encouraging, demonstrating that the proposed system can identify false information on social media. All the classification tasks were validated and showed outstanding results. The final model achieved excellent accuracy (0.99) and F1 score (0.99). The Receiver Operating Characteristic (ROC) curve showed that the true-positive rate of the data in this model was close to one, and the AUC (Area Under the Curve) score was also very high at 0.99. Thus, the proposed model is effective at identifying false information online.
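The ensemble step itself can be pictured as combining per-model class probabilities. The sketch below shows soft voting (averaging), one common way to build a hybrid ensemble; the probabilities are made-up placeholders, not outputs of the paper's transformer models:

```python
# Sketch of the ensemble step only: combine per-model class probabilities
# by soft voting (averaging) and pick the higher-scoring class.
# The probability values are made-up placeholders, not transformer outputs.

def soft_vote(prob_lists):
    # prob_lists: one [p_false, p_true] pair per base model
    n = len(prob_lists)
    avg = [sum(p[i] for p in prob_lists) / n for i in range(2)]
    return "false-news" if avg[0] >= avg[1] else "true-news"

# Three hypothetical base models scoring one article:
scores = [[0.9, 0.1], [0.7, 0.3], [0.6, 0.4]]
print(soft_vote(scores))  # → false-news
```

Soft voting lets a confident minority model outweigh hesitant ones, which is one reason ensembles often beat their individual members.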
Abstract. Decomposing a network into communities is one of the most used techniques in network science. Modularity is typically used to measure the goodness of such a decomposition. In this paper we develop a method which allows us to begin with a crisp decomposition (no overlaps) and move to an overlapping decomposition while increasing the modularity. We also show that the same technique can be used to improve existing overlapping decompositions.
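For a crisp (non-overlapping) decomposition, the modularity being improved is Newman's Q = Σ_g [e_g/m − (d_g/2m)²], where e_g is the number of edges inside community g, d_g its total degree, and m the edge count. A minimal sketch of that computation on a toy graph (the graph and communities are illustrative only, and extending Q to overlaps as in the paper requires a generalized formula):

```python
# Modularity Q of a crisp decomposition:
# Q = (fraction of edges inside communities)
#   - (expected fraction under the configuration model).
# Toy undirected graph and community assignment for illustration only.

def modularity(edges, community):
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # observed fraction of intra-community edges
    inside = sum(1 for u, v in edges if community[u] == community[v]) / m
    # expected fraction: sum over groups of (group degree / 2m)^2
    expected = 0.0
    for g in set(community.values()):
        d = sum(deg[n] for n in community if community[n] == g)
        expected += (d / (2 * m)) ** 2
    return inside - expected

edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("c", "d"), ("d", "e"), ("e", "f"), ("d", "f")]
community = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
print(round(modularity(edges, community), 3))  # → 0.357
```

The paper's method would start from such a crisp assignment and then let boundary nodes (like "c" and "d" here) join multiple communities whenever doing so increases the (overlap-generalized) modularity.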
Abstract. The American commercial airline industry is a crucial part of United States infrastructure and is so large and widespread that it affects all of the nation's citizens in one way or another. There are many moving pieces involved in this industry, but we believe that we can make a significant impact when it comes to forecasting future passenger throughput. We look to utilize machine learning to create a prediction model which can eventually be used by the Department of Homeland Security to improve security and the overall customer experience at airport terminals. The results of this study show that a polynomial regression model can provide utility as well as predictions with an acceptable margin of error.
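The modeling idea can be sketched in a few lines: fit a polynomial regression to a throughput time series and extrapolate one step ahead. The numbers below are synthetic, not TSA data, and the degree-2 fit is only one plausible choice:

```python
# Minimal polynomial-regression sketch: fit a least-squares quadratic to
# daily passenger-throughput counts and forecast the next day.
# The series is synthetic, not TSA data.
import numpy as np

days = np.arange(10)                               # day index 0..9
throughput = 2.0 * days**2 + 3.0 * days + 100.0    # synthetic quadratic trend

coeffs = np.polyfit(days, throughput, deg=2)       # least-squares fit
forecast = float(np.polyval(coeffs, 10))           # predict day 10

print(round(forecast, 1))  # → 330.0
```

In practice the degree would be chosen by held-out validation error, since too high a degree overfits short-term noise in the throughput series.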
Abstract. The shell and tube heat exchanger is the most widely used and most efficient heat exchanger in industry. The outlet temperature of the shell and tube heat exchanger system has to be kept at a desired set point, according to the process requirement, by using controllers. Many controllers, such as PID, feedback plus feedforward, fuzzy logic, and Internal Model based PID controllers, are used to control the temperature. The control system objective is to control the hot fluid outlet temperature by manipulating the inlet cold fluid flow rate. The transfer function of the shell and tube heat exchanger process is obtained using energy balance equations. The PID controller is designed using both the conventional Cohen-Coon tuning method and the advanced IMC method. The closed-loop results are obtained using the PID controller tuned by both the Cohen-Coon and IMC methods. The closed-loop responses for various set point changes in hot fluid outlet temperature and disturbances in the inlet temperature of the cold fluid are studied. The experiment and MATLAB simulations are carried out using the above CC-PID and IMC-PID parameters, and the data are recorded for different set points. Comparisons are made between the results of the experiment and the simulations, and between the results of the Cohen-Coon and IMC tuning methods. On comparing the results, we can demonstrate that the IMC-based PID controller gives better responses in terms of lower overshoot and faster settling time. The emphasis of the present work is on the experimental demonstration of an advanced controller, namely the Internal Model Controller (IMC), on a general process such as shell and tube heat exchanger control.
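The closed-loop behavior being compared can be pictured with a minimal simulation: a discrete PID controller driving a first-order process model toward a set point. The gains and process constants below are made up for illustration, not the Cohen-Coon or IMC values from the paper:

```python
# Illustrative closed loop: a discrete PID controller drives a first-order
# process dT/dt = (-T + K*u)/tau toward the set point, integrated with
# forward-Euler steps. Gains and process constants are arbitrary examples,
# not the paper's CC-PID or IMC-PID tunings.

def simulate(kp, ki, kd, setpoint=50.0, K=2.0, tau=5.0, dt=0.1, steps=2000):
    T, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - T
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        T += dt * (-T + K * u) / tau                # Euler step of the process
        prev_err = err
    return T

final = simulate(kp=1.2, ki=0.4, kd=0.05)
print(round(final, 2))  # settles near the 50.0 set point
```

With integral action the steady-state error goes to zero; the overshoot and settling time the paper compares are exactly the transient features that change when the same loop is retuned with Cohen-Coon versus IMC rules.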
Abstract. Stock market prediction is one of the most difficult analyses of all time. Expert analysts and software engineers are collaborating to create a stable and reliable platform for predicting future stock values. The fundamental difficulty is that a variety of factors influence price fluctuations. Stock recommendations are vital for investment firms and individuals. However, no single stock selection approach can capture the dynamics of all stocks without adequate analysts. Nonetheless, the majority of extant recommendation techniques are built on prediction algorithms such as ANNs (Artificial Neural Networks) that buy and hold high-yielding companies. In this paper, we offer a unique strategy that uses reinforcement learning to recommend a stock portfolio based on Yfinance data sets. We present an ARIMA framework for recommendation systems, as well as a foundation for determining the system's value. Within this paradigm, we perform probabilistic studies of algorithmic approaches. These studies illustrate the value of recalling earlier activities and examine how this recollection may be used.
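The simplest member of the ARIMA family, an AR(1) model x_t = a·x_{t−1} + b, already illustrates the "recalling earlier activities" idea: the next value is predicted from the previous one via least-squares-fitted coefficients. The series below is synthetic, not Yfinance data, and a full ARIMA fit would also handle differencing and moving-average terms:

```python
# Toy AR(1) sketch of the ARIMA-style idea: fit x_t = a*x_{t-1} + b by
# ordinary least squares on (previous, next) pairs, then forecast one step.
# The price series is synthetic, not Yfinance data.

def fit_ar1(series):
    xs, ys = series[:-1], series[1:]          # lag-1 pairs
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var                             # slope on the lagged value
    b = mean_y - a * mean_x                   # intercept
    return a, b

prices = [100.0, 102.0, 104.0, 106.0, 108.0, 110.0]  # synthetic linear drift
a, b = fit_ar1(prices)
next_price = a * prices[-1] + b
print(round(next_price, 1))  # → 112.0
```

On this perfectly linear series the fit recovers a = 1 and b = 2 exactly; on real prices the fitted coefficients summarize how strongly yesterday's value predicts today's, which is the memory the abstract's probabilistic studies examine.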