CATA 2020: Papers with Abstracts

Papers
Abstract. We present our process and development of a web-based system to explore the publication networks of faculty in California public universities in the fields of computer science and electrical engineering. Our project examines these collaboration networks with a focus on publication networks and their geospatial organization, i.e., which institutions are collaborating with which other institutions. We present our data-gathering process, which relies on the Scopus [9] database (Scopus represents a “comprehensive overview of the world’s scientific research output across all disciplines”), and we present the development of a web-based tool using Python and the Google Maps API [12] that allows visualization and exploration of the geospatial structure of the publication networks. These visualizations drove further network analysis, which we present here as well.
Abstract. We have developed a framework for multi-user Virtual Reality experiences aimed at video games played over a network. Features include tracked avatars, interactable physics objects, peer-to-peer networking with a user-matching system, and voice chat, as well as options to customize these modules for a wide range of support. We describe in detail how several components, such as networking, voice chat, and interaction, are implemented, and how to use the library in Unity for your own projects. We also discuss avatar representation in VR and how this tool can be used to facilitate many different types of avatar representation.
Abstract. An adaptive software system is an application that can adapt itself based on different conditions of its users. Multiple conditions/criteria can be used to direct how an application adapts. Spatial visualization (VZ) is one of several human spatial abilities used to predict a person's performance when using a computer application. A difference in VZ level is therefore a suitable adaptation indicator, i.e., high-VZ and low-VZ users should get different features on a user interface (UI) to complete the same task. In this paper, we look at three studies in which we asked participants to verify a set of housing addresses using a location-based application on an Android tablet with different versions of the application; in particular, an adaptive version of the application was involved in the last study. We refer to a UI error (User Interface error) as an error where a user tapped on a non-sensitive region of the screen. We found that, for high-VZ participants, the number of UI errors was significantly smaller when they were equipped with the adaptive software. The results of the three studies and hypothesis tests for significance are reported.
Abstract. The Nevada Research Data Center (NRDC) is a research data management center that collects sensor-based data from various locations throughout the state of Nevada. The measurements collected are specifically environmental data, which are used in cross-disciplinary research across different facilities. Since data is collected at a high rate, it is necessary to be able to visualize it quickly and efficiently. This paper discusses in detail a web application that researchers can use to create visualizations that help with data comparisons. While other web applications exist that allow researchers to visualize the data, this project expands on that idea by allowing researchers not only to visualize the data but also to make comparisons and predictions.
Abstract. Machine learning is an attractive tool to employ in various areas of computer science. It allows us to take a hands-off approach in situations where manual work was previously required. One area in which machine learning has not yet been applied entirely successfully is cybersecurity. The issue is that most classical machine learning models do not consider the possibility of an adversary purposely attempting to mislead the machine learning system. If the possibility that incoming data will be deliberately crafted to mislead and break the machine learning system is ignored, these systems are useless in a cybersecurity setting. Taking this possibility into account may allow us to modify existing security systems and introduce the power of machine learning to them.
Abstract. Autonomous vehicles, or self-driving cars, emerged with a promise to deliver a driving experience that is safe, secure and law-abiding, and that alleviates traffic congestion and reduces traffic accidents. These self-driving cars predominantly rely on wireless technology, vehicular ad-hoc networks (VANETs) and Vehicle-to-Vehicle (V2V) networks, Road Side Units (RSUs), millimeter-wave radars, light detection and ranging (LiDAR), sensors, cameras, etc. Because these vehicles are so dexterous and equipped with such advanced driver-assistance technology, they invite threats, vulnerabilities and hacking attacks. This paper aims to understand and study the technology behind these self-driving cars and to explore, identify and address popular threats, vulnerabilities and hacking attacks to which these cars are prone. This paper also establishes a relationship between these threats, trust and reliability. An analysis of the alert systems in self-driving cars is also presented.
Abstract. The power of technology is one which supersedes any other tool of communication ever formulated and implemented by human beings. The internet has long been cited by scholars and practitioners alike as an empowerment tool that allows individuals to seek, receive or impart information and ideas without any restriction based on boundaries or geographical location. This, therefore, means that online communication has to be protected in line with the international dictums and pretensions that call for the right to freedom of expression.
Abstract. A variety of attacks are regularly attempted against network infrastructure, and artificial intelligence algorithms have been applied to network intrusion prevention with increasing effectiveness for more than two decades. Deep learning methods can achieve high accuracy with a low false-alarm rate for detecting network intrusions. A novel approach using a hybrid algorithm of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) is introduced in this paper to provide improved intrusion detection. This bidirectional algorithm showed the highest known accuracy of 99.70% on a standard dataset known as NSL-KDD. The performance of this algorithm is measured using precision, false positive rate, F1 score, and recall, which were found promising for deployment on live network infrastructure.
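As an illustration of the general idea (the paper's exact architecture and hyperparameters are not given here), the sketch below assembles a hybrid CNN + bidirectional LSTM classifier in Keras for NSL-KDD-style records; the layer sizes, the 41-feature input shape, and the random placeholder data are assumptions for demonstration only.

```python
# Minimal sketch (not the authors' exact network) of a hybrid
# CNN + bidirectional LSTM intrusion classifier for NSL-KDD-style records.
# Assumes 41 preprocessed numeric features per record and binary labels.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_features = 41                                               # NSL-KDD feature count
x_train = np.random.rand(1000, n_features, 1).astype("float32")  # placeholder data
y_train = np.random.randint(0, 2, size=(1000,))                  # 0 = normal, 1 = attack

model = models.Sequential([
    layers.Input(shape=(n_features, 1)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),      # local feature extraction
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.LSTM(64)),                    # bidirectional sequence modelling
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),                    # normal vs. intrusion
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_split=0.1)
```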
Abstract. Traditional computer simulation is replaced by 3D scans of the temporary urban fabric in a Taiwan heritage site for the quantitative assessment of local evolvement. Transferring the as-built point cloud model to vertical and horizontal sections enables the inspection of evolved openness types on an old street enclosed by building facades and remodeled building components. Temporary fabrics, which consist of installations and components, are represented in terms of the modification ratio on the facade. The ratio contributes to the balance between the maintenance of cultural identity and the development of supporting commercial facilities made by local efforts. The variation changes along the old street across the preservation, commercial and residential districts. Results show that the highest ratio exists in the commercial district, where the highest ground activity along the entire street has created a typology of T-shaped or enclosed sections of open space, as shown in a point cloud model so realistic that no former computer model can match it.
Abstract. South Africa has one of the highest Gini coefficients, indicating a high degree of inequality in the country. There is also extreme unemployment, with the expanded unemployment rate at 38.3% and, in some subsections of the economy, as high as 68.3%. Despite this, the Informal Sector (non-agricultural) employs over three million people. Many corporates offer products to the formal sector, the informal sector or both. Commercial margins are often very slim in the informal sector. This paper looks at the use of the Internet of Things, Geographical Information Systems, and GeoHashes to provide business intelligence to merchants in the Informal Sector, thereby helping them improve their competitive advantage.
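To make the GeoHash idea concrete (this is an illustrative sketch, not the paper's system), the pure-Python encoder below converts a merchant's latitude/longitude into a GeoHash string; merchants whose hashes share a prefix fall in the same grid cell, which is what makes the hash useful for grouping location-based sales data. The example coordinates and precision are assumptions.

```python
# Minimal sketch of GeoHash encoding: interleave longitude/latitude bits and
# map every 5 bits to a base-32 character. precision=6 gives ~1.2 km x 0.6 km cells.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat: float, lon: float, precision: int = 6) -> str:
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    even, bit_count, ch, result = True, 0, 0, []
    while len(result) < precision:
        if even:                                  # even bits refine longitude
            mid = (lon_range[0] + lon_range[1]) / 2
            if lon > mid:
                ch, lon_range[0] = (ch << 1) | 1, mid
            else:
                ch, lon_range[1] = ch << 1, mid
        else:                                     # odd bits refine latitude
            mid = (lat_range[0] + lat_range[1]) / 2
            if lat > mid:
                ch, lat_range[0] = (ch << 1) | 1, mid
            else:
                ch, lat_range[1] = ch << 1, mid
        even = not even
        bit_count += 1
        if bit_count == 5:                        # every 5 bits -> one base-32 character
            result.append(BASE32[ch])
            bit_count, ch = 0, 0
    return "".join(result)

# Two merchants with a common hash prefix trade in the same neighbourhood.
print(geohash_encode(-26.2041, 28.0473))          # example Johannesburg coordinates
```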
Abstract. Organizations are highly dependent on their software in carrying out their daily activities. Unfortunately, the repeated changes that are applied to these systems make their evolution difficult. This evolution may be necessary to maintain the software, or to replace or upgrade it. In the case of complex and poorly documented legacy systems, modernization is the only feasible way to achieve the evolution goals. The OMG (Object Management Group) consortium created the Architecture-Driven Modernization (ADM) initiative to cope with the challenges of modernization. This initiative proposes, among other things, modernization through model-driven engineering (MDE). In this context, the modernization of a legacy system not developed in an MDE environment begins with its migration towards this type of environment. This migration raises the problem of obtaining the models that MDE requires to represent the system.
In this paper, we present a new bimodal approach to ADM modernization by enabling automatic and interactive modes to discover a view of the implementation platform of a legacy object-oriented system. Also, we present the key ideas of the algorithms behind this discovery process. Finally, we describe our prototype tool that implements our approach. This tool has been validated on several systems written in C# and Java languages.
Abstract. The extreme intensity of life in large urban centers has begun to affect the productivity and quality of life of cities and their inhabitants; some cities have reached extremes close to collapse, as is the case with traffic congestion in the main cities of the world. On the other hand, digital innovation and economic development make it necessary to provide intelligent solutions to current problems, promoting the entrepreneurial ecosystem and the collaborative economy. Each government should administer, manage and update information from its region and distribute it in the most convenient way to each company or agency that is part of a smart city. To achieve smart cities, we must train digital citizens and take into account the accessibility conditions provided by technology. For this, the implementation of the Internet of Things (IoT) at all possible levels is of the utmost importance. From these points of view, mobility has become a central issue of urban development. Its relationship with sustainability issues and its ability to generate competitiveness and quality of life confront us with the need to rethink its future. These are some of the considerations to include in quality models intended to study the degree of intelligence of cities. When talking about indicators or metrics, the problem arises of being able to generalize or extend each of these measures. In this line of research, a set of metrics and indicators has been defined that is applicable to an ad hoc quality model whose objective is to study the degree of intelligence of cities.
Abstract. There is an increasing demand from numerous applications such as bioinformatics and cybersecurity to efficiently process various types of queries on datasets in a multidimensional Non-ordered Discrete Data Space (NDDS). An NDDS consists of vectors with values coming from a non-ordered discrete domain for each dimension. The BoND-tree index was recently developed to efficiently process box queries on a large disk-based dataset from an NDDS. The original work on the BoND-tree focused on developing the index construction and query algorithms; no work has been reported on exploring efficient and effective update strategies for the BoND-tree. In this paper, we study two update methods based on two different strategies for updating the index tree in an NDDS. Our study shows that the bottom-up update method can provide improved efficiency compared to the traditional top-down update method, especially when the number of dimensions of a vector that need to be updated is small. Our study also shows that the two update methods have comparable effectiveness, which indicates that the bottom-up update method is generally more advantageous.
Abstract. With the rapid growth of data nowadays, new types of database systems, known as NoSQL databases, are emerging in order to handle big data. One type of NoSQL database is the graph database, which uses the graph model to represent data and the relationships among data. Existing graph database systems are passive compared to traditional relational database systems, which allow automatic event handling through active rules. This paper describes our approach to incorporating active rules into graph databases, allowing users to specify business logic in a declarative manner. The active system has been built on top of a passive graph database to react to events automatically. Our focus is on specifying business rules declaratively rather than enforcing integrity constraints using rules. Our system consists of a language framework and an execution model. The language specification is further illustrated with a motivating example that shows the use of rules in an application context. The paper also describes the design and implementation of the execution model in detail.
Abstract. The paper deals with problems that imbalanced and overlapping datasets often encounter. Performance indicators such as accuracy, precision and recall on imbalanced datasets, both with and without overlapping, are discussed and compared with the same performance indicators on balanced datasets with overlapping. Three popular classification algorithms, namely Decision Tree, KNN (k-Nearest Neighbors) and SVM (Support Vector Machines) classifiers, are analyzed and compared.
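For readers who want to reproduce the flavor of such a comparison (this is a generic sketch, not the paper's experimental setup), the snippet below builds a deliberately imbalanced, partially overlapping synthetic dataset with scikit-learn and reports accuracy, precision and recall for the three named classifiers; all dataset parameters are placeholder assumptions.

```python
# Minimal sketch: Decision Tree vs. KNN vs. SVM on an imbalanced,
# overlapping synthetic dataset (placeholder parameters).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 90/10 class imbalance; class_sep < 1 makes the two classes overlap.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           class_sep=0.8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:13s} accuracy={accuracy_score(y_te, y_pred):.3f} "
          f"precision={precision_score(y_te, y_pred):.3f} "
          f"recall={recall_score(y_te, y_pred):.3f}")
```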
Abstract. This paper identifies challenges involved in the transformation of binary executables to run on bare machines such as PCs. It also addresses why we want to transform binary executables to run on bare machines. Text processing applications such as “vi,” “word,” and “notepad” are chosen to illustrate the need for transformation because these editors are the most commonly used across many operating system platforms, including Windows and Linux. They have much functionality in common needed to provide a general text processing application. Why not consolidate these standard functions and develop a generic text processing application? How do you make these editors free of any platform dependencies? Transforming these applications to run on bare PCs or bare machines by using source-level or binary-level transformation will address these challenges. A binary transformation methodology described here lays the groundwork for further research in this area and provides some insight into the transformation process.
Abstract. Many statistical and machine learning models for prediction use historical data as input and produce a single value or a small number of output values. To forecast over many timesteps, it is necessary to run the model recursively. This leads to a compounding of errors, which has adverse effects on accuracy for long forecast periods. In this paper, we show this can be mitigated through the generation of additional features which can have an “anchoring” effect on recurrent forecasts, limiting the amount of compounded error in the long term. This is studied experimentally on a benchmark energy dataset using two machine learning models, LSTM and XGBoost. Prediction accuracy over differing forecast lengths is compared using the forecasting MAPE. We find that, for the LSTM model, short-term energy forecasting is more accurate when a past energy consumption value is used as a feature than when it is not, and that the opposite holds for long-term energy forecasting. For the XGBoost model, the accuracy of both short- and long-term energy forecasting is higher when past values are not used as a feature.
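The sketch below illustrates the two mechanics the abstract relies on, recursive multi-step forecasting (where each prediction is fed back as input, so errors can compound) and MAPE scoring; the toy one-step model and the synthetic series are assumptions standing in for the paper's LSTM/XGBoost models and energy data.

```python
# Minimal sketch of recursive forecasting and MAPE evaluation.
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def recursive_forecast(model, history, horizon, window=24):
    """Feed each prediction back in as input for the next step,
    which is how errors compound over long horizons."""
    history, preds = list(history), []
    for _ in range(horizon):
        x = np.array(history[-window:])        # last `window` observed/predicted values
        y_hat = model(x)                       # one-step-ahead prediction
        preds.append(y_hat)
        history.append(y_hat)                  # prediction becomes a future input
    return preds

# Placeholder one-step "model": weighted average of the recent window.
toy_model = lambda x: float(np.average(x, weights=np.arange(1, len(x) + 1)))

rng = np.random.default_rng(0)
series = 100 + 10 * np.sin(np.arange(200) * 2 * np.pi / 24) + rng.normal(0, 1, 200)
forecast = recursive_forecast(toy_model, series[:176], horizon=24)
print(f"24-step MAPE: {mape(series[176:], forecast):.2f}%")
```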
Abstract. Although different techniques have been developed to filter spam, we are still overwhelmed with spam emails because spammers rapidly adapt to new spam-detection techniques. Currently, machine learning techniques are the most effective way to classify and filter spam emails. In this paper, a comprehensive comparison and analysis of the performance of various classification models on the 2007 TREC Public Spam Corpus is presented, in cases with and without N-grams as well as using separate or combined datasets. It is shown that the inclusion of N-grams in the pre-processing phase provides higher-accuracy results for the classification models in most cases, and that models using the split approach with combined datasets give better results than models using separate datasets.
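As a small, generic illustration of what "with and without N-grams" means in pre-processing (not the paper's pipeline or the TREC corpus), the scikit-learn snippet below trains the same Naive Bayes classifier with unigrams only and with unigrams plus bigrams; the toy emails and labels are placeholders.

```python
# Minimal sketch: the effect of including N-grams in the feature extraction step.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "meeting agenda attached",
          "free money click here", "project status update"]
labels = [1, 0, 1, 0]                        # 1 = spam, 0 = ham (placeholder data)

for ngram_range in [(1, 1), (1, 2)]:         # unigrams only vs. unigrams + bigrams
    clf = make_pipeline(CountVectorizer(ngram_range=ngram_range), MultinomialNB())
    clf.fit(emails, labels)
    print(ngram_range, clf.predict(["claim your free prize"]))
```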
Abstract. Stroke is a serious cerebrovascular condition in which brain cells die due to an abrupt blockage of arteries supplying blood and oxygen or when a blood vessel bursts or ruptures and causes bleeding in the brain. Because the onset of stroke is very sudden in most people, prevention is often difficult. In Japan, stroke is one of the major causes of death and is associated with high medical costs; these problems are exacerbated by the aging population. Therefore, stroke prediction and treatment are important. The incidence of stroke may be avoided by preventive treatment based on the patient’s risk of stroke. However, since judging the risk of stroke onset depends largely on the individual experience and skill of the doctor, a highly accurate prediction method that is independent of the doctor’s experience and skill is necessary. This study focuses on a predictive method for subarachnoid hemorrhage, which is a type of stroke. LightGBM was used to predict the rupture of cerebral aneurysms using a machine learning model that takes clinical, hemodynamic and morphological information into account. This model was used to analyze samples from 338 cerebral aneurysm cases (35 ruptured, 303 unruptured). Simulation of cerebral blood flow was used to calculate the hemodynamic features, while the surface curvature was extracted from the 3D blood-vessel-shape data as a morphological feature. This model yielded a sensitivity of 0.77 and a specificity of 0.83.
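The sketch below shows the shape of such an experiment with LightGBM's scikit-learn interface, training a rupture classifier and reporting sensitivity and specificity from the confusion matrix; the synthetic feature matrix, the class ratio mirroring 35/338, and all hyperparameters are placeholder assumptions, not the clinical data or the authors' configuration.

```python
# Minimal sketch: LightGBM rupture classifier evaluated by sensitivity/specificity.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
# Placeholder stand-ins for clinical, hemodynamic and morphological features.
X = rng.normal(size=(338, 12))
y = rng.binomial(1, 35 / 338, size=338)          # roughly 35 "ruptured" cases out of 338

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
model = LGBMClassifier(n_estimators=200, class_weight="balanced")
model.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```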
Abstract. Word recognition is the task of identifying words in images of printed or handwritten documents. It is especially challenging to recognize words in cursive handwriting. In this paper, we present a framework that uses density-based clustering for word segmentation in printed or handwritten documents, including cursive handwriting. First, we perform various data preprocessing steps, including converting images to black-and-white, straightening tilted images, and removing background noise. K-means clustering and/or neighborhood density are used to find parameters for the preprocessing steps, and the preprocessing has proven to be very effective. For the word segmentation, we propose density-based clustering to segment the words in multiple steps, including blurring, plotting, and clustering. We also developed a system for the framework, including preprocessing and clustering functionalities. Our approach works very well for printed documents. It works reasonably well for handwritten documents if words are relatively far from each other. The performance on handwritten documents can be further improved by using line segmentation.
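To show the density-based idea in miniature (this is a generic sketch using DBSCAN rather than the authors' specific clustering procedure), the snippet below clusters the foreground pixel coordinates of a synthetic binary page so that each dense group of pixels becomes one word; the fake blobs and the eps/min_samples values are assumptions.

```python
# Minimal sketch: cluster ink-pixel coordinates so each dense cluster is one word.
import numpy as np
from sklearn.cluster import DBSCAN

page = np.zeros((60, 200), dtype=np.uint8)
page[20:30, 10:60] = 1        # placeholder blob standing in for word 1
page[20:30, 90:150] = 1       # placeholder blob standing in for word 2

ys, xs = np.nonzero(page)                     # coordinates of foreground pixels
coords = np.column_stack([xs, ys])

# eps controls how far apart pixels may be and still belong to the same word;
# in practice it would be tuned from the estimated character spacing.
labels = DBSCAN(eps=5, min_samples=10).fit_predict(coords)

for word_id in sorted(set(labels) - {-1}):    # -1 marks noise pixels
    pts = coords[labels == word_id]
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    print(f"word {word_id}: bounding box x=[{x0},{x1}] y=[{y0},{y1}]")
```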
Abstract. Freshmen who take an introductory computer programming course often ask their classmates for help. In some cases, they even copy each other’s programs, which is considered cheating. The problem of cheating in Computer Science students’ homework assignments has so far been handled mainly through administrative punishment of the cheaters. The success of such an approach depends to a large degree on the ability of the instructor to recognize the fact of cheating, which is a complicated task. With a large number of students taking the course, identifying the cheaters sometimes requires considerable time. The author of this paper suggests a way of solving the cheating problem by encouraging students’ cooperation rather than trying to fight it. He also suggests changing the course grading policy to emphasize the importance of regularly checking the students’ understanding of the course material.
Abstract. This paper presents a battery monitoring system using a multilayer neural network (MNN) for state of charge (SOC) estimation and state of health (SOH) diagnosis. In this system, the MNN utilizes experimental discharge voltage data from lithium battery operation to estimate SOH and uses present and previous voltages for SOC estimation. Experimental results show that the proposed battery monitoring system performs SOC estimation and SOH diagnosis well.
Abstract. Automating the detection of pavement cracks has become a challenging mission. In the last few decades, many methods have been proposed to solve this problem, because maintaining roads in a stable condition is essential for the safety of people and public property. It has been reported that maintaining one mile of roads in New York City in the USA may cost from four to ten thousand dollars. In this paper, we explore our initial idea of developing a lightweight Convolutional Neural Network (CNN or ConvNet) model that can be used to detect pavement cracks. The proposed CNN was trained using the AigleRN data set, which contains 400 images of road cracks at 480×320 resolution. The proposed lightweight CNN architecture fits the image data set better due to the reduction in the number of parameters. The proposed CNN was capable of detecting cracks with a varying number of sample images. We evaluated the CNN architecture over different training/testing splits (i.e., 90/10, 80/20, and 70/30) for 11 runs. The obtained results show that the 90/10 training/testing division outperforms the other splits, with an average accuracy of 97.27%.
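The sketch below conveys what "lightweight" typically means in practice: few convolutional filters and global average pooling instead of a large dense layer keep the parameter count small; the layer sizes, the random placeholder images, and the training settings are assumptions, not the authors' published architecture.

```python
# Minimal sketch (not the authors' exact network) of a lightweight crack classifier
# for 480x320 grayscale road images with a 90/10 training/testing division.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split

X = np.random.rand(40, 320, 480, 1).astype("float32")   # placeholder images
y = np.random.randint(0, 2, size=(40,))                  # 1 = crack, 0 = no crack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

model = models.Sequential([
    layers.Input(shape=(320, 480, 1)),
    layers.Conv2D(8, 3, activation="relu"), layers.MaxPooling2D(4),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(4),
    layers.GlobalAveragePooling2D(),          # avoids a huge fully connected layer
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=3, batch_size=8, validation_data=(X_te, y_te))
print("trainable parameters:", model.count_params())
```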
Abstract. In this study, we build a system that is able to estimate the degree of concentration of students while they are working with computers. The purpose of learning is to gain knowledge of a subject and to reach a sufficient level of performance in that subject, and concentration is key to a successful learning process. But the concept of concentration involves some ambiguity and lacks a clear definition from an engineering point of view, and it is difficult to measure its degree by outside observation. This paper begins with a discussion of the concept of concentration, followed by a discussion of how to measure it using standard devices and sensors. The proposed system analyzes facial images of students recorded by PC webcams attached to their computers to infer their degree of concentration. In this study, we define the concentration degree over a short time interval. It takes continuous values from 0 to 1 and is determined based on the efficiency of simple work performed over the interval. We convert the continuous values into three discrete levels: low, middle and high. In our first approach, we apply a deep learning algorithm to the facial images only. In the second, we obtain the face movements as a set of time series and run the learning algorithm using both kinds of data. We explain an outline of the methods and the system with several experimental results.
Abstract. A “STEAMS” (Science, Technology, Engineering, Artificial Intelligence, Math, Statistics) approach was conducted to handle missing-value imputation when clustering Chocolate Science patterns. Hierarchical clustering and a dendrogram were utilized to cluster the commercial chocolate products into different product groups that indicate nutrition composition and product health. To further handle the missing-value imputation, a Neural Network algorithm was utilized to predict the missing Cocoa% based on the other available nutrition components. The Hyperbolic Tangent activation function was used to create the hidden layer with three nodes. Neural networks are very flexible models and tend to over-fit data, so a Definitive Screening Design (DSD) was conducted to optimize the Neural settings in order to minimize the over-fitting concern. The goodness of fit of both the training set and the validation set reaches 99% R-squared. The profiler sensitivity analysis showed that Chocolate Type and Vitamin C are the most sensitive factors for predicting the missing Cocoa%. The results also indicated that “Fruit” Chocolate should be added as a fourth Chocolate Type. The Neural black-box algorithm can reveal hidden Chocolate Science and product patterns. This paper has demonstrated the power of using Engineering DOE and Neural Network (AI) algorithms through “STEAMS”.
Abstract. The study of relationships established in social media is an emerging area of research. An Online Social Network (OSN) is a collection of social entities carrying a lot of information that enriches the network. A structured model of the OSN dataset is required for informative knowledge mining and efficient Social Network Analysis (SNA). Graphical representation of data helps in analysing structural properties, studying dense substructures, forming clusters and identifying the numerous types of entities exhibiting associations based on different activity fields. This paper discusses various graph-theoretic representations of OSNs, including structure-based and content- or interaction-based approaches. An integrated framework is proposed in this paper that learns from various user attributes and their associated interactions, network structure, timeline history, etc., from a polarized OSN graph in order to generate an efficient friend-suggestion recommender system.
Abstract. In this article we summarize, at a high level, some of the popular smart technologies that may constitute many smart city ecosystems. More specifically, we emphasize the automation of various processes based on the extraction and analysis of digital media, through speech signals and images. Currently, there are many productized systems for the personalization and recommendation of digital media content, as well as various services in different areas. Most of them are developed with human-machine interaction in mind. Usually, this is done through the conventional use of a mouse and a keyboard: the user types their response manually, which is then recorded by the system for further analysis.
Abstract. Recommending and providing suitable learning materials to learners according to their cognitive ability is important for effective learning. Assessing the cognitive load of a learner while studying a learning material can be helpful in assessing his/her intelligence and knowledge-adapting abilities. This paper presents a real-time method for assessing the intelligence of students according to their instant learning skills. The proposed system reads the brain waves of students of different age groups at the time of learning and classifies their instant learning skills using a cognitive score. Based on this, learners are offered suitable learning materials that maintain them in an overall state of optimal learning. The main issues concerning this approach are constructing cognitive state estimators from a multimodal array of physiological sensors and assessing initial baseline values, as well as changes in the baseline. These issues are discussed in a block-wise data processing structure: the different data streams are synchronized, features are extracted, and a cognitive state metric is formed by classification/clustering of the feature sets. The results demonstrate the efficiency of using the cognitive score in RTLCS for identifying the instant learning abilities of learners.
Abstract. Software engineering management (SEM) involves activities such as planning, coordinating, measuring, monitoring, and controlling. Since maximizing productivity means achieving the highest value with the lowest resource consumption, a factor taken into account in these activities is productivity, which includes the total effort used to satisfy the exit criteria of a software process. In recent years, productivity has been studied from several points of view; the contribution of this study is to analyze its variability by classifying software projects based on their size measure, type of development, development platform, and programming language type, so that SEM activities become more objective. In this study, data sets of software projects were selected from the International Software Benchmarking Standards Group Release 2018 to perform the following three experiments between types of development, and by type of size measure: (1) independent of both the type of development platform and the programming language type, (2) dependent on the type of development platform and independent of the programming language type, and (3) dependent on both the type of development platform and the programming language type. Results show statistically significant differences in each experiment.
Abstract. Foreign metal removal is a key quality-control process in the food and pharmaceutical industries and can be achieved using a magnetic separator. Typically, magnetic separators are installed at existing production facilities by remodeling, because they have the ability to deal with the problems that arise in production facilities. However, when measuring for remodeling, problems such as measurement error, forgotten measurements, changes in location, detail proposal changes, or the impossibility of measurement occur because of complex, distorted or dented shapes, dimensional inaccuracy, and the surroundings. Additionally, magnetic separators designed to fit an existing production line have problems in that their dimensions differ from those of the existing facilities, and deficiencies in performance are expected. To solve these problems with a non-conventional method, we developed an adaptive interface design system that combines high-accuracy measurement by means of 3D scanning, which reproduces the existing production facilities with their distorted and dented shapes by reverse engineering, with optimized finite element analysis of the magnetic field to satisfy the expected performance in surface flux density and to inspect the shape, dimensions, and performance of the design using computer-aided engineering.
Abstract. Petri nets (PNs) are a form of directed graph that can be used to model and simulate systems. They are a very useful tool for developing and analyzing algorithms prior to implementation. Adding the component of time allows systems with prioritized actions to be modeled more effectively. This paper assesses a Texas Hold’em algorithm using Petri nets and uses them to develop an improved version of this algorithm. Both are implemented in Python scripts to obtain results showing which is more effective.
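For readers unfamiliar with Petri nets, the sketch below shows the basic mechanics in Python: places hold tokens and a transition fires only when every input place has enough tokens. The toy "bet" transition is an illustrative assumption, not the paper's Texas Hold'em model.

```python
# Minimal sketch of a Petri net: marking, transitions, enabling and firing.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n                                  # consume input tokens
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n          # produce output tokens

# Toy betting step (placeholder, not the paper's poker model).
net = PetriNet({"cards_dealt": 1, "chips": 2})
net.add_transition("bet", inputs={"cards_dealt": 1, "chips": 1},
                   outputs={"pot": 1, "awaiting_call": 1})
net.fire("bet")
print(net.marking)   # {'cards_dealt': 0, 'chips': 1, 'pot': 1, 'awaiting_call': 1}
```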
Abstract. In this paper we propose new real-time architectures for monitoring underwater oil and gas pipelines using an underwater wireless sensor network (UWSN). The new monitoring architectures for an underwater oil/gas pipeline inspection system combine a real-time UWSN with nondestructive In-Line Inspection (ILI) technology. These architectures will help in reducing or detecting pipeline defects such as cracks, corrosion, weld flaws, and wall-thickness loss by improving data transfer from the pipeline to the processor, which extracts useful information and delivers it to the onshore main station, hence decreasing delays in defect detection.
Abstract. To facilitate the creation of a robotic soccer team, a robust kicking strategy and algorithm must be developed. Through the use of Fuzzy Petri nets, a strategy was created and developed into an algorithm with a 95% success rate. Image processing and recognition were used to implement this algorithm on NAO robots.
Abstract. In recent years, graphs with massive numbers of nodes and edges have become widely used in various application fields, for example, social networks, web mining, and transport traffic. Several researchers have shown that reducing the dimensionality is very important in analyzing extensive graph data, and they have applied a variety of dimensionality-reduction strategies, including linear and nonlinear methods. However, it is still not clear to what extent information is lost or preserved when these techniques are applied to reduce the dimensions of large networks. In this study, we measure the utility of graph dimensionality reduction, and we show that when using the recently proposed HDR method to reduce the dimensionality of a graph, the utility loss is small compared with popular linear techniques such as PCA, LDA, FA, and MDS. We measure the utility based on three essential network metrics: Average Clustering Coefficient (ACC), Average Path Length (APL), and Average Betweenness (ABW). The results show that HDR achieved a lower rate of utility loss compared to the other dimensionality-reduction methods. We performed our experiments on three undirected, unweighted graph datasets.
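The three utility metrics are standard graph measures, and the sketch below shows how they can be computed with NetworkX on a small random graph; the graph parameters are placeholders, not the paper's datasets, and in the paper's setting the same numbers would be computed before and after dimensionality reduction to quantify the utility loss.

```python
# Minimal sketch of the three utility metrics on a placeholder random graph.
import networkx as nx

G = nx.erdos_renyi_graph(n=200, p=0.05, seed=7)       # dense enough to be connected

acc = nx.average_clustering(G)                         # Average Clustering Coefficient (ACC)
apl = nx.average_shortest_path_length(G)               # Average Path Length (APL)
abw = sum(nx.betweenness_centrality(G).values()) / G.number_of_nodes()  # Average Betweenness (ABW)

print(f"ACC={acc:.4f}  APL={apl:.4f}  ABW={abw:.5f}")
```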
Abstract. The use of an appropriate routing protocol algorithm is an important issue in wireless sensor network (WSN) research. Depending upon the deployed network topology, routing protocols can be classified in many ways, including hierarchical cluster-based routing protocols. Hierarchical cluster-based routing protocols pursue an energy-efficient way to reduce the overall energy consumption within the monitored cluster area by performing data aggregation along with data fusion. The objective of this study is to present a state-of-the-art survey of selected hierarchical cluster-based routing protocols in WSNs. In this paper, hierarchical cluster-based routing protocol algorithms are reviewed and compared, with their advantages, disadvantages and main contributions. Additionally, each hierarchical cluster-based routing protocol algorithm is analyzed by comparing the measured parameters of their performance.
Abstract. The design of multiuser OFDM underwater acoustic (UWA) communication systems is very challenging due to the time-varying and frequency-selective fading of UWA channels. The key to ensuring reliable, optimal transmission for every user is a suitable resource assignment for each. This paper proposes a new adaptive channel-matched OFDM scheme (CM-OFDM) to overcome the performance degradation caused by frequency-selective fading across the OFDM subcarriers. In the proposed scheme, the OFDM subcarriers are sorted according to their corresponding channel gains. Then the resources are assigned to the different users according to their required quality of service (QoS). The user with the highest QoS requirement is assigned the best subcarriers, and the other users are similarly assigned the remaining subcarriers. This optimized resource assignment technique guarantees enhanced performance with no need to increase the transmitted power or change the modulation schemes. The performance of the proposed technique is investigated and compared with the uniform and random subcarrier assignment methods used for multiuser OFDM systems. The simulations are performed for a multipath frequency-selective UWA channel model. The simulation results clearly show the advantages of the CM-OFDM scheme for users with high QoS requirements, at the expense of a slight degradation in performance for the other users with lower QoS requirements.
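The core allocation step described above (sort subcarriers by channel gain, give the best block to the user with the highest QoS requirement) is easy to express; the sketch below does so with NumPy. The Rayleigh-like gains, user names, and per-user subcarrier counts are illustrative assumptions, not the paper's UWA channel model.

```python
# Minimal sketch of channel-matched subcarrier assignment.
import numpy as np

rng = np.random.default_rng(3)
n_subcarriers = 64
gains = np.abs(rng.standard_normal(n_subcarriers)
               + 1j * rng.standard_normal(n_subcarriers))     # |H_k| per subcarrier

# Users listed from highest to lowest QoS requirement, with subcarriers needed each.
users = [("user_A_high_qos", 16), ("user_B", 24), ("user_C_low_qos", 24)]

order = np.argsort(gains)[::-1]            # subcarrier indices, best gain first
assignment, start = {}, 0
for name, count in users:
    assignment[name] = np.sort(order[start:start + count])
    start += count

for name, subs in assignment.items():
    print(f"{name}: mean |H| = {gains[subs].mean():.3f} over {len(subs)} subcarriers")
```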
Abstract. Capsule networks (CapsNet) are the next generation of neural networks. CapsNet can be used for classification of data of different types. Today’s General Purpose Graphical Processing Units (GPGPUs) are more capable than before and let us train these complex networks. However, time and energy consumption remains a challenge. In this work, we investigate whether skipping trivial operations, i.e., multiplications by zero in CapsNet, can save energy. We base our analysis on the number of multiplications by zero detected while training CapsNet on the MNIST and Fashion-MNIST datasets.
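The counting idea can be illustrated in a few lines: for a dense computation y = Wx, every zero operand marks a multiplication that could in principle be skipped. The shapes and the 50% sparsity in the sketch below are placeholder assumptions, not CapsNet's actual tensors or the sparsity the authors measured.

```python
# Minimal sketch: count scalar multiplications with a zero operand in y = Wx.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 1152))             # placeholder weight matrix
x = rng.standard_normal(1152)                    # placeholder input vector
x[rng.random(1152) < 0.5] = 0.0                  # assumed sparsity in the activations

# Each output element multiplies every x[j] once, so a zero x[j] makes an
# entire column of trivial multiplications that could be skipped.
total_mults = W.size
trivial_mults = W.shape[0] * np.count_nonzero(x == 0)
print(f"{trivial_mults}/{total_mults} multiplications "
      f"({100 * trivial_mults / total_mults:.1f}%) have a zero operand")
```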
Abstract. In recent years, people have become more dependent on the Internet as their main source of information about healthcare. A number of research projects in the past few decades have examined and utilized internet data for information extraction in healthcare, including disease surveillance and monitoring. In this paper, we investigate and study the potential of internet data, such as internet search keywords and search query patterns, for disease monitoring and detection in the healthcare domain. Specifically, we investigate search keyword patterns for disease outbreak detection. Accurate and timely prediction and detection of disease outbreaks can have a big positive impact on the entire healthcare system. Our method utilizes machine learning to identify interesting patterns related to a target disease outbreak from search keyword logs. We conducted experiments on the flu, which is the most searched disease of interest for this problem. We show examples of keywords that can be good predictors of flu outbreaks. Our method demonstrates that the correlation between search query and keyword trends is reliable enough to be used to predict the outbreak of the disease.
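The basic signal behind such a method is the correlation between a keyword's query volume and reported case counts over time. The sketch below computes a Pearson correlation on two synthetic weekly series; the series, the keyword name, and the selection threshold are placeholder assumptions, not the paper's data or criteria.

```python
# Minimal sketch: correlate weekly search-query volume with reported flu cases.
import numpy as np

rng = np.random.default_rng(5)
weeks = np.arange(52)
cases = 100 + 80 * np.exp(-((weeks - 6) ** 2) / 20) + rng.normal(0, 5, 52)  # winter peak
queries = 0.9 * cases + rng.normal(0, 10, 52)       # keyword volume roughly tracks cases

r = np.corrcoef(queries, cases)[0, 1]               # Pearson correlation coefficient
print(f"correlation between 'flu symptoms' queries and cases: r = {r:.2f}")

# A keyword would be kept as an outbreak predictor only if its correlation with
# case counts exceeds a chosen threshold, e.g. r > 0.8.
```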
Abstract. Fatigue-related accidents are increasing due to long work hours, medical conditions, and age, which decrease response time in a moment of hazard. One visual indicator of drowsiness and fatigue is excessive yawning. In this paper, a non-optical sensor is presented in the form of a car dashcam used to record driving scenarios and imitate real-life driving situations such as being distracted or talking to a passenger next to the driver. We built a deep CNN model as the classifier to classify each frame as a yawning or non-yawning driver. We classify drivers' fatigue into three levels, alert, early fatigue and fatigue, based on the number of yawns: alert means the driver is not yawning, early fatigue is when the driver yawns once in a minute, and fatigue is when the driver yawns more than once in a minute. An overall decision is made by analyzing the source score and the condition of the driver's fatigue state. The robustness of the proposed method was tested under various illumination contexts and a variety of head motion modes. Experiments are conducted using the YAWDD dataset, which contains 322 subjects, to show that our model presents a promising framework to accurately detect drowsiness levels in a less complex way.
Abstract. Science museums with hands-on and interactive exhibits are on the rise. As museums grow, the need arises for an online platform that allows visitors to continue their experience beyond a day visit. In this paper, we first provide a brief survey of techniques for building scalable cloud-native software frameworks. In order to achieve low cost and persist user data, we built a Django application using Heroku and Postgres. This platform can be scaled horizontally on demand to handle highly variable user traffic for augmenting the museum experience. With a focus on educational experiences, a participant's progress on activities at the museum is saved through an API we built and can be viewed on a website. Different activities at the museum generate data for the API, which can be viewed anywhere. Simulated data was loaded into our framework to validate the efficacy of our solution. Future testing is outlined in collaboration with the Fleischmann Planetarium through a trial experiment with museum visitors.
Abstract. Age estimation has many real-world applications, such as security control, biometrics, customer relationship management, entertainment and cosmetology. In fact, facial age estimation has gained wide popularity in recent years. Despite numerous research efforts and advances in the last decade, traditional human age-group recognition from sequences of 2D color images is still a challenging problem. The goal of this work is to recognize human age groups using only depth maps, without additional joint information. As a practical solution, we present a novel representation of the global appearance of aging effects such as wrinkle depth. The proposed framework relies, first of all, on an extended version of the Viola-Jones algorithm for extracting the face and the regions of interest most affected by aging. Then, the 3D histogram of oriented gradients is used to describe local appearances and shapes in the depth map, for a more compact and discriminative representation of the aging effect. The presented method has been compared with state-of-the-art 2D approaches on public datasets. The experimental results demonstrate that our approach achieves better and more stable performance.