Anna Georgiadou, Spiros Mouzakitis and Dimitrios Askounis, National Technical University of Athens, Decision Support Systems Laboratory, Iroon Polytechniou 9, 15780 Zografou, Greece
This paper outlines the design and development of a survey targeting the cyber-security culture assessment of critical infrastructures during the COVID-19 crisis, when living routines were seriously disturbed and working reality fundamentally affected. Its foundations lie in a security culture framework consisting of 10 security dimensions analyzed into 52 domains and examined under two pillars: organizational and individual. A detailed question-by-question analysis is presented, revealing the aims, goals and expected outcomes of each question. The paper concludes with the survey implementation and delivery plan, which follows a number of pre-survey stages, each serving a specific methodological purpose.
cybersecurity culture, assessment survey, COVID-19 pandemic, critical infrastructures.
HuiHui He and YongJun Wang, College of Computer Science, National University of Defense Technology, ChangSha, China
Due to the interactivity of stateful network protocols, network protocol fuzzing suffers from high blindness and low test-case validity. Existing blackbox fuzzing has the disadvantages of high randomness and blindness, while manually describing a protocol specification requires expert knowledge, is tedious, and cannot handle protocols without public documentation, which limits the effectiveness of current network protocol fuzzers. In this paper, we present PNFUZZ, a fuzzer that adopts state inference based on a packet clustering algorithm together with a coverage-oriented mutation strategy. We train a clustering model on packets of the target protocol and use the model to identify the server's protocol state, thereby optimizing the process of test-case generation. The experimental results show that the proposed approach yields a measurable improvement in fuzzing effectiveness.
Fuzzing, Software Vulnerabilities, Network Protocol, Network Packet Clustering.
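The state-inference idea in the PNFUZZ abstract can be sketched in a few lines: cluster observed server packets by simple features and treat the cluster of a response as a proxy for the protocol state. The features, the k-means routine and the sample packets below are illustrative assumptions, not PNFUZZ's actual design (which uses richer features and a coverage-oriented mutator).

```python
import random

def packet_features(pkt: bytes):
    # Toy feature vector: packet length and first byte. A real fuzzer
    # would use richer features (header fields, token n-grams, codes).
    return (len(pkt), pkt[0] if pkt else 0)

def kmeans(points, k, iters=20, seed=0):
    # Minimal k-means over feature tuples.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def infer_state(pkt, centers):
    # The cluster index of a server response is used as its inferred state.
    p = packet_features(pkt)
    return min(range(len(centers)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
```

With two well-separated packet families (short status banners vs. long data packets), the inferred states differ, which is all the mutator needs to pick state-appropriate test cases.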
Hager Ali Yahia1, Mohammed Zakaria Moustafa2, Mohammed Rizk Mohammed3, Hatem Awad Khater4, 1Department of Communication and Electronics Engineering, Alexandria University, Alexandria, Egypt, 2Department of Electrical Engineering (Power and Machines Section), Alexandria University, Alexandria, Egypt, 3Department of Communication and Electronics Engineering, Alexandria University, Alexandria, Egypt, 4Department of Mechatronics, Faculty of Engineering, Horus University, Egypt
A support vector machine (SVM) learns the decision surface from two different classes of input points. In many applications, some input points are misclassified, and not every point can be fully assigned to one of the two classes. In this paper, a bi-objective quadratic programming model with fuzzy parameters is utilized, and different feature quality measures are optimized simultaneously. An α-cut is defined to transform the fuzzy model into a family of classical bi-objective quadratic programming problems, and the weighting method is used to optimize each of them. An important contribution of the proposed fuzzy bi-objective quadratic programming model is that different efficient support vectors are obtained by changing the weighting values. The experimental results show the effectiveness of the α-cut, combined with the weighting parameters, in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution among the generated efficient solutions.
Support Vector Machines (SVMs), Classification, Multi-objective Problems, Weighting Method, Fuzzy Mathematics, Quadratic Programming, Interactive Approach.
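The weighting method the abstract relies on scalarizes two objectives into one: minimize w·f1 + (1−w)·f2 and sweep w to trace efficient solutions. The toy objectives (two one-dimensional convex quadratics) and the gradient-descent solver below are illustrative assumptions; the paper's actual model is a fuzzy bi-objective SVM quadratic program.

```python
def weighted_min(w, a=1.0, b=3.0, lr=0.1, steps=200):
    # Weighting method on two convex quadratics:
    #   f1(x) = (x - a)^2,  f2(x) = (x - b)^2
    # Minimize w*f1 + (1-w)*f2 by plain gradient descent.
    x = 0.0
    for _ in range(steps):
        grad = 2 * w * (x - a) + 2 * (1 - w) * (x - b)
        x -= lr * grad
    return x
```

Each weight w yields a different efficient (Pareto-optimal) point, here the convex combination w·a + (1−w)·b, which mirrors how changing the weighting values in the paper yields different efficient support vectors.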
Divya S L, Department of Mathematics, Maharaja’s College, Ernakulam, Kerala, India
In this paper, we present an application of the new exponential intuitionistic fuzzy entropy that we previously proposed. The results obtained using the new entropy measure are then compared with those of some existing intuitionistic fuzzy entropy measures. It is found that our measure is more suitable in certain situations.
Fuzzy Set, Intuitionistic Fuzzy Set, Fuzzy Entropy, Intuitionistic Fuzzy Entropy, Exponential Intuitionistic Fuzzy Entropy.
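For readers unfamiliar with the notion, an exponential intuitionistic fuzzy entropy maps a set of (membership, non-membership) pairs to [0, 1], giving 0 for crisp elements and 1 for maximal uncertainty. The specific formula below is one exponential form from the literature, chosen for illustration; it is not necessarily the exact measure proposed in the paper.

```python
import math

def exp_if_entropy(ifs):
    # ifs: list of (mu, nu) pairs with 0 <= mu + nu <= 1.
    # Each element contributes via t = (mu + 1 - nu) / 2, so that
    # crisp elements (mu=1, nu=0 or mu=0, nu=1) give 0 and the most
    # uncertain element (mu = nu) gives 1 after normalization.
    n = len(ifs)
    total = 0.0
    for mu, nu in ifs:
        t = (mu + 1.0 - nu) / 2.0
        total += t * math.exp(1.0 - t) + (1.0 - t) * math.exp(t) - 1.0
    return total / (n * (math.sqrt(math.e) - 1.0))
```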
Esra Çakır and Ziya Ulukan, Department of Industrial Engineering, Galatasaray University, Istanbul, Turkey
Since COVID-19 became a pandemic, education has been interrupted in many countries and training has temporarily continued on online platforms. However, it is difficult to determine which of the many existing web (or video) conferencing software packages is most suitable for classroom education. The aim of this study is to rank these platforms according to criteria determined by experts and to select the best among ten options by using interval-valued fuzzy parameterized intuitionistic fuzzy soft sets.
Fuzzy multi-criteria decision making, interval-valued fuzzy sets, intuitionistic fuzzy soft sets, web conferencing software selection.
Nel R. Panaligan1 and Patrick Angelo P. Paasa2, 1Information Technology Department, Southern Philippines Agri-Business and Marine and Aquatic School of Technology, Malita, Davao Occidental, Philippines, 2Computer Science Division, Ateneo de Davao University, Davao City, Philippines
The development of automatic fish counters has been driven by the need for accurate, long-term and cost-effective counting and for object recognition, in line with the advancement of aquaculture in the country. Non-invasive methods of fish counting are ultimately limited by the properties of emerging technologies, for example when the candidates for counting are transparent and/or small (bangus fry). Image processing is one of the most modern approaches to automating the counting process. The main objective of the study is to evaluate three image segmentation algorithms on 2D images of bangus fry with touching or overlapping individuals, and to determine whether they are capable of segmenting tiny two-week-old bangus fry. The three image segmentation algorithms, each evaluated with different methods, are (1) the Watershed Algorithm, (2) the Hough Transform and (3) Concavity Analysis. The study involves four basic steps used in image processing: image acquisition, image pre-processing, image segmentation and object counting. The results show that the second method of the Watershed Algorithm, which identifies local maxima on the distance transform, performs best among the algorithms, with an accuracy rate of 86.47% and zero false detections on experimental data of four sets of 2D images containing 100, 200, 300 and 400 bangus fry per test image.
Image Segmentation Algorithms, Watershed Algorithm, Hough Transform, Concavity Analysis, Evaluation.
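The winning method in the abstract (distance transform plus local maxima) can be sketched without any imaging library: compute each foreground pixel's distance to the background, then count peaks of that distance map, since two touching objects keep separate peaks even when their silhouettes merge. The grid, the BFS distance transform and the peak rule below are a simplified illustration, not the study's implementation.

```python
from collections import deque

def distance_transform(grid):
    # Multi-source BFS from background (0) cells: each foreground (1)
    # cell gets its 4-connected distance to the nearest background cell.
    h, w = len(grid), len(grid[0])
    dist = [[0 if grid[y][x] == 0 else -1 for x in range(w)] for y in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w) if grid[y][x] == 0)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] == -1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def count_objects(grid):
    # Peaks of the distance transform approximate object centres;
    # a plateau of adjacent peak cells counts as one object.
    dist = distance_transform(grid)
    h, w = len(dist), len(dist[0])
    nbrs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    peak = [[dist[y][x] > 1 and all(
                not (0 <= y + dy < h and 0 <= x + dx < w)
                or dist[y + dy][x + dx] <= dist[y][x]
                for dy, dx in nbrs)
             for x in range(w)] for y in range(h)]
    seen, count = [[False] * w for _ in range(h)], 0
    for y in range(h):
        for x in range(w):
            if peak[y][x] and not seen[y][x]:
                count += 1          # flood-fill one peak plateau
                stack, seen[y][x] = [(y, x)], True
                while stack:
                    cy, cx = stack.pop()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and peak[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

On a dumbbell shape (two solid squares joined by a one-pixel bridge) plain connected-component counting would report one object, while the peak count correctly reports two.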
Marie-Anne Xu1 and Rahul Khanna2, 1Crystal Springs Uplands School, CA, USA, 2University of Southern California, CA, USA
Recent progress in machine reading comprehension and question answering has allowed machines to reach and even surpass human performance. However, the majority of benchmark questions have only one answer, and more substantial testing on questions with multiple answers, or multi-span questions, has not yet been carried out. Thus, we introduce a newly compiled dataset consisting of questions with multiple answers that originate from previously existing datasets. In addition, we run the top BERT-based models pre-trained for question answering on this dataset to evaluate their reading comprehension abilities. Among the three types of BERT-based models, RoBERTa exhibits the most consistently high performance. We find that the models perform similarly on this new multi-span dataset (21.492% F1) compared to the multi-span subset of the source dataset (25.0% F1). We conclude that these similarly high model evaluations indicate that the models are indeed capable of adjusting to questions that require multiple answers. We hope that our findings will assist future development in question answering and improve existing question-answering products and methods.
Natural Language Processing, Question Answering, Machine Reading Comprehension.
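The F1 figures quoted in the abstract are token-level scores. A common way to extend the SQuAD-style token F1 to multi-span answers is to join all predicted spans and all gold spans and compare them as bags of tokens, as sketched below; whether the paper uses exactly this variant is an assumption.

```python
def multi_span_f1(predicted, gold):
    # predicted, gold: lists of answer-span strings.
    # Spans are concatenated and compared as multisets of lowercase tokens.
    pred = " ".join(predicted).lower().split()
    ref = " ".join(gold).lower().split()
    if not pred or not ref:
        return float(pred == ref)
    ref_counts = {}
    for t in ref:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred:
        if ref_counts.get(t, 0) > 0:   # count each gold token at most once
            ref_counts[t] -= 1
            common += 1
    if common == 0:
        return 0.0
    p, r = common / len(pred), common / len(ref)
    return 2 * p * r / (p + r)
```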
Tebatso Moapel1, Sunday Ojo2 and Oludayo Olugbara3, 1Department of Computer Science, University of South Africa, Roodepoort, Florida Park, 2Inclusive African Indigenous Language Technology Institute, Pretoria, Gezina, 3Department of Computer Science, Durban University of Technology, Durban, Greyville
Setswana, an African Bantu language in the Sotho group, is one of the eleven official languages of South Africa. As with other natural languages, Setswana is ambiguous: there are lexical units in Setswana that embody multiple senses or meanings. This poses a challenge when developing computational Natural Language Processing (NLP) tools such as machine translation, information extraction, document analysis and word prediction tools. This paper provides a taxonomy of Setswana ambiguities as a step towards developing a Word Sense Disambiguator (WSD) for Setswana. The paper further presents ambiguity challenges faced in computational linguistics when developing linguistic analysis tools such as Part-of-Speech (POS) taggers and language translation systems.
Language Ambiguities, Natural Language Processing, Machine Translation.
Simon Wild and Soyhan Parlar, University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland
This paper analyses how the required skills in a job post can be extracted. With automated extraction of skills from unstructured text, applicants could be matched more accurately and search engines could provide better recommendations. The problem is approached by classifying the relevant parts of the description with a multinomial naïve Bayes model, which identifies the section of the unstructured text in which the requirements are stated. Subsequently, a named entity recognition (NER) model extracts the required skills from the classified text. This approach minimises false positives, since the data being analysed has already been filtered. The results show that the naïve Bayes model classifies up to 99% of the sections correctly, while the NER model extracts 65% of the skills required for a position. The accuracy of the NER model is not yet sufficient for production use: its performance on the validation set was insufficient. A more consistent labelling guideline needs to be in place, and more data needs to be annotated, to increase performance.
Named entity recognition, naïve Bayes, job posting, information retrieval, natural language processing, unstructured text.
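The first stage of the pipeline above, multinomial naïve Bayes over section text, is simple enough to sketch from scratch. The class names ("req"/"other") and the tiny training sentences are invented for illustration; the authors' actual features, tokenizer and training data are not specified in the abstract.

```python
import math
from collections import Counter

class MultinomialNB:
    # Minimal multinomial naive Bayes with Laplace (add-one) smoothing,
    # for labelling job-post sections as requirements vs. other text.
    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for text, lab in zip(texts, labels):
            toks = text.lower().split()
            self.counts[lab].update(toks)
            self.vocab.update(toks)
        self.totals = {c: sum(self.counts[c].values()) for c in self.classes}
        return self

    def predict(self, text):
        v = len(self.vocab)
        best, best_lp = None, -math.inf
        for c in self.classes:
            lp = self.prior[c]
            for t in text.lower().split():
                lp += math.log((self.counts[c][t] + 1) / (self.totals[c] + v))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Only sections classified as requirements would then be handed to the NER model, which is what keeps the false-positive rate of skill extraction down.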
Ali M. Alagrami1 and Maged M. Eljazzar2, 1Department of Computer Science, University of Venice, Italy, 2Faculty of Engineering, Cairo University, Egypt
Tajweed is the set of rules for reciting the Quran with correct pronunciation of the letters and all their qualities: every letter in the Quran must be given its due characteristics, applied to that particular letter in its specific context, which may differ at other times. These characteristics include melodic rules such as where to stop and for how long, when to merge two letters in pronunciation, when to stretch certain letters, and when to put more stress on some letters than on others. Most papers focus mainly on the main recitation rules and pronunciation, but not on Ahkam Al-Tajweed, which gives a different rhythm and melody to the pronunciation with every different Tajweed rule and is also considered very important in reading the Quran. In this paper we introduce a new approach to detecting Quran recitation (Tajweed) rules using a support vector machine and a threshold scoring system, and we discuss in detail the complete system, from the pre-processing stage through filter-based feature extraction to classification.
SVM, Machine Learning, Quran Recitation, Tajweed, Quranic Recitation.
Shiyuan Zhang1, Evan Gunnell2, Marisabel Chang2, Yu Sun2, 1Irvine, CA 92620, 2Department of Computer Science, California State Polytechnic University, Pomona
As more students are required to have standardized test scores to enter higher education, developing vocabulary becomes essential for achieving ideal scores. Each individual has his or her own study style that maximizes efficiency, and there are various approaches to memorization. However, it is difficult to find the specific learning method that best fits a person. This paper designs a tool to customize personal study plans based on clients' different habits, including difficulty distribution, the difficulty order in which words are learned, and the types of vocabulary. We applied our application to educational software and conducted a quantitative evaluation of the approach via three types of machine learning models. By calculating cross-validation scores, we evaluated the accuracy of each model and discovered the model that returns the most accurate predictions. The results reveal that linear regression has the highest cross-validation score and can therefore provide the most efficient personal study plans.
Machine learning, study plan, vocabulary.
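Model selection by cross-validation score, as described above, works by holding out each fold in turn, fitting on the rest, and averaging the held-out error. The one-dimensional least-squares model and the fold scheme below are a stdlib-only sketch of that procedure, not the paper's actual feature set or models.

```python
import statistics

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b, closed form.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return a, my - a * mx

def cv_mse(xs, ys, k=5):
    # k-fold cross-validation score (mean squared error) for the
    # linear model; lower is better. Fold i holds out indices i, i+k, ...
    n = len(xs)
    errs = []
    for fold in range(k):
        test = set(range(fold, n, k))
        tr_x = [xs[i] for i in range(n) if i not in test]
        tr_y = [ys[i] for i in range(n) if i not in test]
        a, b = fit_linear(tr_x, tr_y)
        errs += [(ys[i] - (a * xs[i] + b)) ** 2 for i in test]
    return statistics.mean(errs)
```

Computing such a score for each candidate model and keeping the lowest (or, for accuracy-style scores, the highest) is exactly the comparison that led the authors to pick linear regression.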
Cristian Irimita1,2 and Marius Nedelcu1,2, 1Technology Department, Orange Romania, Bucharest, Romania, 2University Politehnica of Bucharest, Telecommunications Department, Bucharest, Romania
This paper presents a new network architecture, namely slice networks. We are heading towards the implementation of the first 5G networks and the provision of the first services over this new infrastructure. Through slice networks, these new services and the traditional ones will be implemented with high efficiency. We discuss topics such as the creation of dedicated virtual networks for every implementation requirement, service quality satisfaction and the user experience (high transfer speeds, low latencies). We also elaborate on the need for the network slicing concept, the types of slicing, the selection mode, the business models implemented through the new architecture, and the requirements in terms of virtualization, orchestration, management and automation capabilities. The techniques used in the laboratory for slice implementation are highlighted: management and orchestration realized with OSM (Open Source MANO), virtual infrastructure realization with OpenStack, and process automation realized through ONAP.
5G, slicing, NFV, SDN, scalability, network automation.
Evan R.M. Debenham and Roberto Solis-Oba, Department of Computer Science, The University of Western Ontario, Canada
In many computer games, checking whether one object is visible from another is very important. Field of Vision (FOV) refers to the set of locations that are visible from a specific position in a scene of a computer game. Once computed, an FOV can be used to quickly determine the visibility of multiple objects from a given position. This paper summarizes existing algorithms for FOV computation, describes their limitations, and presents new algorithms which aim to address these limitations. We first present an algorithm which makes use of spatial data structures in a way which is new for FOV calculation. We then present a novel technique which updates a previously calculated FOV, rather than re-calculating an FOV from scratch. We compare our algorithms to existing FOV algorithms and show that they provide substantial improvements to running time.
Field of Vision (FOV), Computer Games, Visibility Determination, Algorithms.
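As context for the abstract above, the classic baseline that the new algorithms improve on is grid ray casting: a cell is in the FOV if the line of sight from the origin reaches it without passing through a wall. The Bresenham-line sketch below is that baseline, not the paper's spatial-data-structure or incremental-update algorithms.

```python
def line(a, b):
    # Bresenham line from a to b, inclusive, as (x, y) pairs.
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    pts = []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return pts
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy

def fov(grid, origin):
    # grid: list of strings, '#' = wall. A cell is visible when no wall
    # lies strictly between it and the origin along the Bresenham line
    # (the wall cell itself is still visible).
    h, w = len(grid), len(grid[0])
    visible = set()
    for y in range(h):
        for x in range(w):
            pts = line(origin, (x, y))
            if all(grid[py][px] != '#' for (px, py) in pts[:-1]):
                visible.add((x, y))
    return visible
```

Casting one line per cell makes this O(cells × line length) per query, which is exactly the cost that precomputed FOVs, spatial data structures and incremental updates aim to avoid.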
John Hawkins, Trans-ai Research Group, Australia http://getting-data-science-done.com
Prioritization of machine learning projects requires estimates of both the potential ROI of the business case and the technical difficulty of building a model with the required characteristics. In this work we present a technique for estimating the minimum required performance characteristics of a predictive model given a set of information about how it will be used. This technique will result in robust, objective comparisons between potential projects. The resulting estimates will allow data scientists and managers to evaluate whether a proposed machine learning project is likely to succeed before any modelling needs to be done. The technique has been implemented into the open source application MinViME (Minimum Viable Model Estimator) which can be installed via the PyPI python package management system, or downloaded directly from the GitHub repository. Available at https://github.com/john-hawkins/MinViME
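The kind of constraint such an estimator derives can be illustrated with the simplest case: if each true positive is worth a benefit and each false positive incurs a cost, an alerting model only breaks even when its precision exceeds cost / (benefit + cost). The two helper functions below are an illustrative sketch of this reasoning, not MinViME's actual API.

```python
def min_precision(tp_benefit, fp_cost):
    # Break-even precision: expected value of one alert is
    # precision * tp_benefit - (1 - precision) * fp_cost, which is
    # non-negative iff precision >= fp_cost / (tp_benefit + fp_cost).
    return fp_cost / (tp_benefit + fp_cost)

def min_recall(required_total_benefit, tp_benefit, positives):
    # Minimum recall needed to clear a required total benefit,
    # ignoring false-positive costs for simplicity.
    return required_total_benefit / (tp_benefit * positives)
```

Estimates like these can be compared against published baselines for similar problems before any modelling starts, which is the project-triage use case the abstract describes.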
Asmaa Hadane1,2, Saad Benjelloun1, Lhachmi Khamar1,3, 1MSDA, Mohammed 6th Polytechnic University, Benguerir, Morocco, 2LGCE, EST Salé, Mohammed V University, Rabat, Morocco, 3LIPIM, ENSA Khouribga, Sultan Moulay Sliman University, Beni Mellal, Morocco
Many industrial processes require continuous agitation to avoid fouling problems and sludge deposition; hence these processes are conducted in stirred tanks. To investigate how well mixed a tank is, multifluid CFD studies can be conducted. We present here a CFD simulation of a desupersaturation reactor, based on the Euler-Euler modeling approach and considering a multiphase flow involving a liquid phase (phosphoric acid) and a poly-dispersed solid phase, i.e. a sludge with different particle sizes, where each size range is treated as a separate phase. To assess the mixture homogeneity, we evaluate the solid suspension in the desupersaturation reactor following conventional methods and two newly proposed methodologies: the first evaluates the suspension quality in the mixing system by compartment, and the second consists of assessing the uniform convergence of the solid concentration. We show that our methodologies permit a better investigation of the optimal operating conditions for homogenization, as well as a clearer redefinition of quantities such as the just-suspension speed (Njs).
Modeling, Simulation, CFD, Solid dispersion, Homogenization.
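A common post-processing check behind compartment-wise suspension analysis is the relative standard deviation (coefficient of variation) of solid concentration over sampling points: values near zero indicate a homogeneous suspension. The per-compartment wrapper below is an illustrative sketch of that idea under the stated assumption, not the paper's actual methodology or data.

```python
import statistics

def suspension_quality(concentrations):
    # Coefficient of variation of local solid concentrations;
    # 0 means perfectly homogeneous, larger means more segregated.
    mean = statistics.fmean(concentrations)
    return statistics.pstdev(concentrations) / mean

def quality_by_compartment(compartments):
    # compartments: dict mapping compartment name -> list of local
    # solid concentrations sampled (e.g. from CFD cell values).
    return {name: suspension_quality(c) for name, c in compartments.items()}
```

Evaluating this quantity compartment by compartment, rather than over the whole tank at once, localizes where homogenization fails, which is the motivation for the first proposed methodology.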
Somi Kolita, Research Scholar, Department of IT, The Assam Kaziranga University, Jorhat, Assam, India, Purnandu Bikash Acharjee, Assistant Professor, Department of IT, The Assam Kaziranga University, Jorhat, Assam, India
Speech analysis is essential for identifying the emotion type of a speaker, and it involves different kinds of parameters such as fundamental frequency, intensity, formants, duration, MFCCs, etc. In speech synthesis systems, syllabification is considered the backbone, and the rules of this process vary across languages. The main objective of this work is syllable-based intonation analysis. In this paper, we design the syllabic structures that form the syllables of Assamese words, which will later be introduced into a speech synthesis system.
Intonation, Syllabification, fundamental frequency, duration.
Désiré Guel1, Boureima Zerbo2, Jacques Palicot3 and Oumarou Sié1, 1Department of Computer Engineering, Joseph Ki-Zerbo University, Burkina Faso, 2Thomas-Sankara University, Burkina Faso, 3CentraleSupélec/IETR, Campus de Rennes, France
In recent years, the PAPR (Peak-to-Average Power Ratio) of OFDM (Orthogonal Frequency-Division Multiplexing) systems has been intensively investigated. Published works mainly focus on how to reduce PAPR, since a high PAPR leads to clipping of the signal when it passes through a nonlinear amplifier. This paper extends the previously published work on "Gaussian Tone Reservation Clipping and Filtering for PAPR Mitigation". We deeply investigate the statistical correlation between PAPR reduction and the distortion generated by three adding-signal techniques for PAPR reduction. We first propose a generic function for PAPR reduction, and then analyse the PAPR reduction capability of each technique versus the distortion it generates. The signal-to-noise-and-distortion ratio (SNDR) metric is used to evaluate the distortion generated within each technique, assuming that OFDM baseband signals are modelled by complex Gaussian processes with a Rayleigh envelope distribution for a large number of subcarriers. The results for one of the techniques are presented for the first time in this paper, unlike those for the other two PAPR reduction techniques, whose studies were already published. Comparisons of the proposed SNDR approximations with those obtained by computer simulations show good agreement. An interesting result highlighted in this paper is the strong correlation between PAPR reduction performance and distortion signal power: the results show that the PAPR reduction gain increases as the distortion signal power increases. Through these three examples of PAPR reduction techniques, we draw the following conclusion: in an adding-signal context, the adding signal for PAPR reduction is closely linked to the distortion generated, and a trade-off between PAPR reduction and distortion must definitely be found.
Orthogonal Frequency Division Multiplexing (OFDM), Peak-to-Average Power Ratio (PAPR), signal-to-noise-and-distortion ratio (SNDR), Adding Signal Techniques.
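The quantity at the heart of the abstract above is easy to compute: the PAPR of one OFDM symbol is the peak instantaneous power of the time-domain signal over its mean power, in dB. The direct (O(n²)) IDFT and oversampling factor below are an illustrative sketch; a real system would use an FFT and the specific signal model of the paper.

```python
import cmath
import math
import random

def papr_db(symbols, oversample=4):
    # Zero-pad the subcarrier symbols for oversampling, take the IDFT
    # to get the time-domain OFDM symbol, and return peak/mean power in dB.
    n = len(symbols) * oversample
    padded = list(symbols) + [0j] * (n - len(symbols))
    x = [sum(padded[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
         for t in range(n)]
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))
```

All subcarriers in phase give the worst case (an impulse-like symbol with PAPR near 10·log10(N) dB), while random QPSK symbols give the typical single-digit-to-low-teens dB values that motivate PAPR-reduction techniques.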
Copyright © CSEA 2020