
Open Access

Special Issue on Artificial Intelligence

Value Added Abstracts

Pages: 1 - 1

Automatic speech recognition for French regional accents in the field of French media

Matthieu Parmentier

Standard French is French pronounced without any regional accent, whose syntax, morphology and spelling are described in dictionaries, grammar books and textbooks. Although standard French is taught at school and used in written communication, administrative documentation and national radio/TV channels, most regional and local radio/TV channels, as well as shows promoting tourist areas, typically feature regional French accents. In this work, we focus on speech recognition services for French media. Our goal is to build useful resources to extend the language models used in speech recognition APIs. Over the past few years, APIs for automatic speech recognition (ASR) have played an increasingly crucial role in a variety of media applications. In this paper, we present STT-FTV, a lightweight open-source framework that allows users to easily connect four different ASR tools to be trained with different French regional accents. The speech corpus has been collected and transcribed from various TV and radio contents chosen specifically to represent the major differences from standard French. The STT-FTV system offers a relative reduction of the Word Error Rate in the context of regional accent transcription.
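As a rough illustration of the evaluation metric involved, the sketch below computes Word Error Rate as the word-level Levenshtein distance between a reference transcript and an ASR hypothesis, divided by the reference length. The example sentences are invented; this is not the STT-FTV code itself.

```python
# Minimal WER sketch: edit distance over words, normalised by reference length.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

# Two substitutions against a six-word reference -> WER = 2/6 ~ 0.33
print(wer("le chat est sur le toit", "le chat est sur la toiture"))
```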

Value Added Abstracts

Pages: 2 - 2

A Rule-Based Two-Level Classification Approach for Recognition of Machining Features from 3D Geometric Models

Alan C. Lin and Riona Ihsan Media

In this paper, a rule-based approach is presented to enhance the capability of recognizing both intersecting and isolated features with form variation. Features are recognized and classified at two levels. At the first level, edge and loop types are used to categorize features into two groups, namely single-entry features (SEF) and multiple-entry features (MEF). At the second level, hole and pocket features are recognized by either convex internal loops or concave external loops, while step and slot features are recognized by hybrid loops and the total number of tool-accessible directions. In addition, transitional features are categorized into simple and rounded features according to surface types and the number of face sets. Non-planar surfaces, including cylinders, spheres and cones, are also examined. Finally, special machining features such as T-slots and dovetail slots are evaluated by means of pre-defined rules.

The proposed approach has been implemented using NX Open, with Siemens NX as the platform for system development. Five real industrial parts are used as test examples. The results show that eighteen feature types are successfully recognized, accommodating intersecting and isolated machining features with variable topology.
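A minimal sketch of the two-level dispatch described above might look as follows; the attributes and rules are simplified placeholders for the paper's full rule set, not its actual implementation.

```python
# Illustrative two-level, rule-based feature classification (simplified).
from dataclasses import dataclass

@dataclass
class Feature:
    entries: int            # number of entries into the feature
    loop_kind: str          # 'convex_internal', 'concave_external', 'hybrid'
    tool_directions: int    # number of tool-accessible directions

def classify(feature):
    # Level 1: single-entry vs multiple-entry feature
    group = 'SEF' if feature.entries == 1 else 'MEF'
    # Level 2: loop type and tool accessibility decide the machining feature
    if feature.loop_kind == 'convex_internal':
        kind = 'hole'
    elif feature.loop_kind == 'concave_external':
        kind = 'pocket'
    elif feature.loop_kind == 'hybrid':
        kind = 'slot' if feature.tool_directions >= 2 else 'step'
    else:
        kind = 'unclassified'
    return group, kind

print(classify(Feature(entries=1, loop_kind='convex_internal',
                       tool_directions=1)))   # -> ('SEF', 'hole')
```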

Value Added Abstracts

Pages: 3 - 3

Multiple Material Deposition (MMD) system for applications to hazardous environments and disaster scenarios

Alec John Burns

The largest risk in the aftermath of environmental disasters typically comes from the damage that has occurred to structures. A material extrusion device capable of traversing highly irregular terrain, operating in hazardous or hard-to-reach locations, and repairing damage would mitigate many of the risks for survivors and relief workers.


In this work we present a novel solution to this challenge: the combination of a Mobile Tracked Rover (MTR) with a Multiple Material Deposition (MMD) system. The MMD system adds an additional dimension to the manoeuvrability and utility of the MTR in extreme environments. With its differential tracked drive, the rover inherits a stable base but is typically limited in extreme-environment manoeuvrability, in contrast to a rotorcraft drone, which has poor static stability but uninhibited manoeuvrability. The MMD system allows the rover to deposit expanding material for real-time climbing support, allowing it to climb or bridge previously insurmountable obstacles. The MMD also allows the MTR to deposit material accurately, which is essential for effective reinforcement of damaged structures, where material must be strategically placed.


The MMD has been developed to support deposition of a number of materials, and the platform has been optimised in simulated disaster scenarios. One such material is polyurethane foam, whose characteristics allow easy deposition, expansion and solidification in a short enough time for the rover to use the substance as real-time support for climbing. The high compressive strength of the polyurethane foam is sufficient for temporary support structures or structural reinforcement, allowing the MTR to operate in support of disaster relief workers. An additional feature of the MMD is its ability to deposit clay, making it able to repair damaged masonry and further improving the MTR's utility in more general maintenance and repair scenarios.

Value Added Abstracts

Pages: 4 - 4

A recommender system that suggests companies to job seekers based on their psychometric preferences and company sentiment scores

Amanulla Ashraff

The increasing importance of the web as a medium for electronic and business transactions has served as a catalyst, or rather a driving force, for the introduction and implementation of recommender systems. Recommender systems play a major role in processing and analyzing thousands of data rows or reviews, helping humans make purchase decisions about a product or service. They can also predict how a particular user would rate a product or service based on the user's behavioral profile. At present, recommender systems are used extensively in every domain known to us; they are said to be ubiquitous. However, in the field of recruitment they are not yet being fully utilized.

Recent statistics show an increase in staff turnover, which negatively impacts the organization as well as the employee. The reasons include company culture, working flexibility (work-from-home opportunities), lack of learning advancement and pay scale. Further investigation reveals a lack of guidance or support to help job seekers find the company that will suit them best; although information about companies is available, job seekers cannot read all the reviews themselves and reach an analytical decision.

Most research up to this point has focused on recommending jobs accurately based on collaborative filtering, knowledge-based filtering and hybrid approaches, but no research has been done in this domain where companies are suggested to job seekers, and no such combinational approach has been used elsewhere.

In this paper, a new approach is proposed that studies the available review data on IT companies (scoring their reviews based on user review sentiment), collects information on job seekers, including their psychometric evaluations, and presents the job seeker with useful outputs on which companies are most suitable for them.

The theoretical approach, the algorithmic approach and the importance of such a system are discussed in this paper.
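As a purely illustrative sketch of the combinational idea, assume each company carries a sentiment score derived from its reviews and a "culture" vector comparable with a job seeker's psychometric profile; the recommender then ranks companies by a weighted blend of the two signals. The company names, traits, weights and numbers below are invented, not the paper's method.

```python
# Toy company ranking: blend review sentiment with psychometric/culture fit.
import numpy as np

companies = {
    'AcmeSoft':  {'sentiment': 0.72, 'culture': np.array([0.8, 0.3, 0.6])},
    'DataWorks': {'sentiment': 0.55, 'culture': np.array([0.2, 0.9, 0.4])},
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_companies(psychometric_profile, alpha=0.5):
    """Score = alpha * sentiment + (1 - alpha) * profile/culture similarity."""
    scored = [(alpha * c['sentiment']
               + (1 - alpha) * cosine(psychometric_profile, c['culture']), name)
              for name, c in companies.items()]
    return sorted(scored, reverse=True)

seeker = np.array([0.9, 0.2, 0.7])   # e.g. autonomy, teamwork, flexibility
print(rank_companies(seeker))        # highest combined score first
```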

Value Added Abstracts

Pages: 5 - 5

Custom datasets from zero: a brief discussion on how to acquire and prepare data for training deep learning neural networks, whether for sorting, recognizing objects or targeting

André Costa

A quick chat about creating custom datasets from absolute zero; the true path between the stones. The focus is on data acquisition and preparation, without going into the details of neural network training: a summary of what I wish someone had told me when I was starting out. The talk begins with image collection, covering criteria for image acquisition such as ambient light control, dark areas in the image and the existence of hotspots, and the importance of highlighting the points of the scene that carry really relevant data. It then looks at possible improvements in the quality of collected images using optical filters to alleviate major problems, such as eliminating light reflections or enhancing and suppressing colors, and asks which filters should always be in our toolbox. Other topics include imaging with special cameras such as infrared, multispectral and thermal cameras; the least you need to know about DICOMs, and whether you need special and expensive software to deal with them; and the quantity of images needed to train a network ("Why do I need two thousand images if I trained my network with only one?"). On the preparation side, it covers data standardization (Microsoft COCO, Pascal VOC) and the main tools, free and paid, for annotating images for object recognition and segmentation. OK, I prepared my dataset. Now what? How do I train? We review some frameworks for network training, covering image classification and the recognition and segmentation of objects in images. GPUs: what are they? Do I need one to train? And after training, is it mandatory to use GPUs to run my project? Now I know how to train a network, so I know everything and I am ready for the market, right? Not quite...
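For reference on the data standardization point, here is a hedged sketch of a minimal annotation file in the Microsoft COCO format: one image, one bounding-box annotation and one category. File names, ids and coordinates are invented.

```python
# Minimal COCO-format annotation file: images, annotations, categories.
import json

coco = {
    "images": [
        {"id": 1, "file_name": "frame_0001.jpg", "width": 640, "height": 480}
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [120, 80, 200, 150],   # [x, y, width, height] in pixels
         "area": 200 * 150, "iscrowd": 0}
    ],
    "categories": [
        {"id": 1, "name": "defect", "supercategory": "object"}
    ],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```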

Value Added Abstracts

Pages: 6 - 6

Real-time Tools for Exploring the English Language

Andrew Ernest Ritz

Teaching conversational English requires tools that support effective communication and are usable by students and teachers alike. Research shows that meaning-focused instruction is effective in developing a student's oral skills, but to achieve high levels of linguistic and socio-linguistic competency, a student must also concentrate on form. A wide range of tools is therefore needed. Artificial Intelligence (AI) can help with creating the necessary tools because a large number of linguistic topics are under active investigation by the AI community. For instance, tools that focus on fluency, discrete grammar points, meaning and communication style can already be constructed with existing technology. In addition to the breadth of tools needed, classroom-based tools must work in real time and be perceived as an essential part of the conversation, something else AI can help with. The concept of real-time tools for exploring the English language is a first step towards developing such a set of tools. This talk describes the tools created so far, the AI technology that underlies their operation and the problems met while utilizing software provided by the AI community. Creating applications for use in the classroom is challenging because limits cannot be placed on the subject matter, yet available hardware is clearly limited. The search for cost-effective hardware solutions led to the development of a single-board computer (SBC) cluster for experimenting with the building of real-time tools. This cluster is also described, along with the software created for experimenting with the creation of such tools.

Value Added Abstracts

Pages: 9 - 9

Data and Artificial Intelligence Strategy: A Conceptual Enterprise Big Data Cloud Architecture to Enable Market-Oriented Organisations

Caio Moreno

Market-Oriented companies are committed to understanding both the needs of their customers and the capabilities and plans of their competitors, through processes of acquiring and evaluating market information in a systematic and anticipatory manner. At the same time, many companies have in recent years defined as one of their main strategic objectives to become a truly data-driven organisation in the current Big Data context. They are willing to invest heavily in a Data and Artificial Intelligence Strategy and to build enterprise data platforms that will enable this Market-Oriented vision. This paper presents an Artificial Intelligence Cloud Architecture that can help global companies move from descriptive to prescriptive use of data, leveraging existing cloud services to deliver a truly Market-Oriented organisation in a much shorter time compared with traditional approaches.

Value Added Abstracts

Pages: 7 - 7

A Data-Based Analysis of Super-State Hidden Markov Models in Non-Intrusive Load Monitoring

Arooj Arif

The energy crisis is costing economies worldwide and depleting natural resources at an alarming rate. Conservation of energy has become a point of concern, particularly in large consumption sectors. Load monitoring is an effective solution for conserving electrical energy at large scale, cost-effectively. While many non-intrusive load disaggregation techniques are in practice, some techniques are known to outperform others. This paper discusses the optimal unsupervised learning technique for load disaggregation in Non-Intrusive Load Monitoring (NILM) for Type-I and Type-II loads, catering to factors such as residential requirements, practicality in real time, ease of integration, scalability, feature extraction, reduced training and increased accuracy. A review of various implemented models of super-state Hidden Markov Models (HMMs) in NILM is provided, concluding that this methodology is the most efficient approach to load disaggregation in the large-scale scenario considered. A comparison with recent state-of-the-art work is also provided to show that this approach outperforms conventional approaches in terms of the above-mentioned factors, which are catered for by super-state HMMs.
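As a toy illustration of the super-state idea, the sketch below builds a joint HMM over two hypothetical on/off appliances, where each super-state is one combination of individual appliance states, and uses Viterbi decoding to recover the most likely joint-state sequence from an aggregate power signal. All appliances, probabilities and readings are invented for illustration.

```python
# Toy super-state HMM for load disaggregation with Viterbi decoding.
import numpy as np
from itertools import product

appliance_power = [('fridge', 100.0), ('heater', 1000.0)]
# Super-states: every combination of individual on/off appliance states.
super_states = list(product([0, 1], repeat=len(appliance_power)))
state_power = np.array([sum(p for (_, p), on in zip(appliance_power, s) if on)
                        for s in super_states])

n = len(super_states)
trans = np.full((n, n), 0.05 / (n - 1))          # switching is rare...
np.fill_diagonal(trans, 0.95)                    # ...staying put is likely

def emission_logprob(observed, sigma=20.0):
    """Gaussian emission: aggregate reading ~ N(super-state power, sigma^2)."""
    return -0.5 * ((observed - state_power) / sigma) ** 2

def viterbi(readings):
    logdelta = emission_logprob(readings[0]) + np.log(1.0 / n)
    back = []
    for obs in readings[1:]:
        scores = logdelta[:, None] + np.log(trans)   # prev state x next state
        back.append(scores.argmax(axis=0))           # best predecessor
        logdelta = scores.max(axis=0) + emission_logprob(obs)
    path = [int(logdelta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [super_states[s] for s in reversed(path)]

# -> [(0, 0), (1, 0), (1, 1), (1, 1)]: fridge turns on, then the heater too.
print(viterbi(np.array([20.0, 110.0, 1090.0, 1100.0])))
```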

Value Added Abstracts

Pages: 8 - 8

Detection and Pose-Estimated Grasping of an Industrial Robot in Bin Picking Operations

Avinash Sen

The technique used by a robot to grab objects that are randomly placed inside a box or a pallet is called bin picking. Bin picking has evolved greatly over the years thanks to tremendous strides in computer vision technology, software development and gripping solutions. However, the creation of a versatile system, capable of collecting any type of object without deforming it, regardless of the disordered environment around it, remains a challenge. In this thesis, a solution to this problem based on learning an appearance model using convolutional neural networks (CNNs) is proposed. By synthetically combining object models with backgrounds of complex composition and high graphical quality, we are able to generate photo-realistic images with accurately annotated 3D poses for all objects in our custom-created dataset. Using this network, we can estimate object poses with sufficient accuracy for real-world semantic grasping in a cluttered bin by a real robot.

Value Added Abstracts

Pages: 10 - 10

Digital Twin Technology for Complex Industrial Processes

Chinemelu Ezeh

With ever more accessible sensors and cheaper processing power, it has become possible to capture entire business processes in a digital replica. A digital replica enables rapid design and verification in a safe, cost-effective manner without committing too much capital to physical manifestations. Such a capability to virtually verify processes and products has existed in various forms, at various levels of fidelity, in industries such as manufacturing and logistics. This talk highlights the state of the art in digital twin technology to accelerate verified design and to visualise and communicate planned outcomes in new manufacturing and logistics operations, such as battery disassembly and complex logistics operations.

Value Added Abstracts

Pages: 11 - 11

How AI is Driving Digital Transformation and Improving Operational Efficiencies

Dan Turchin

Many CIOs are exploring AI (Artificial Intelligence) as a key enabler to roll out their IT and data strategies and improve employee satisfaction. Join this informative session, in which Dan will describe how the automation of operations through AI has already generated successful outcomes, and how CIOs are using AI to accelerate their digital transformation goals. Specifically, you'll learn:

  • How AI-driven automation can reduce IT costs.
  • How to improve operational efficiencies with AI.
  • Why AI-driven automation can improve customer satisfaction.

  • Five actionable tips to successfully incorporate AI into digital transformation initiatives.

Value Added Abstracts

Pages: 12 - 12

The AI-powered genetic test clarifying contraception

Daniel San Martin

Different hormonal contraceptives can affect people in a variety of ways. Finding the one which will lead to the fewest complications in an individual patient is something of a minefield, with many women testing various drugs through trial and error before they settle on one they’re happy with. Side effects like acne, weight gain, nausea and anxiety, to name but a few, mean patients may find certain hormonal contraceptives intolerable while others have few or no complications.

Historically, there hasn’t been any reliable litmus test to ascertain an individual woman’s response to a certain pill before she starts taking it. Now, the minds behind healthtech start-up Pexxi are using artificial intelligence (AI)-powered genetic testing to try and help patients avoid months – or even years – of contraception roulette. To use Pexxi, patients initially undergo a personal assessment via a chatbot questionnaire, which explores their unique mental and physical profile. Factors like having recently given birth, a history of blood clots or migraines can all mean a patient is unable to take oestrogen, which is found in combined contraceptive pills. Having this flagged from the beginning saves them from going through the genetic testing part of the process when it isn’t suitable for them.

Pexxi’s database currently contains information solely about contraceptive pills, but the company is looking to incorporate the contraceptive patch and ring further down the line. The technology is currently in beta-testing stages, which so far have been oversubscribed by 300%.

Value Added Abstracts

Pages: 13 - 13

Towards Aerial Humanoid Robotics

Daniele Pucci

Robotics is a very active research field of engineering. Leaving aside the cognitive capabilities of intelligent robots, manipulation and locomotion remain two fundamental robot abilities that receive much of the scientific attention. The resulting endeavour gave impetus to new branches of robotics aimed at combining manipulation and locomotion in single robotic platforms. Aerial Manipulation, for instance, conceives robots capable of flying while manipulating an object, thus unifying manipulation and aerial locomotion. Humanoid Robotics, instead, merges manipulation and terrestrial locomotion, since humanoid robots can usually manipulate objects and move around by exploiting contacts with the environment (e.g. walking). This talk presents the first step towards unifying manipulation, aerial and terrestrial locomotion by laying the foundations of Aerial Humanoid Robotics. Robots implementing Aerial Humanoid Robotics can fly, walk, manipulate and transport objects. Aerial Humanoid Robotics is paramount in disaster scenarios, where legged robots struggle with stable and robust terrestrial locomotion on challenging terrains. Furthermore, Aerial Humanoid Robotics can be used as a testbed for flying exoskeletons, which may one day allow humans personal flight.

Value Added Abstracts

Pages: 14 - 14

Systematic analysis of the deep semantic segmentation architecture PSPNet on the land cover ISPRS Vaihingen dataset

Eiman Kattan

This paper provides a systematic review of the Pyramid Scene Parsing Network (PSPNet) deep learning semantic segmentation architecture applied to remotely sensed imagery. Firstly, the state-of-the-art deep learning architectures for image-based semantic segmentation are reviewed, highlighting their contribution and significance in the field of image segmentation. Secondly, the ISPRS benchmark dataset (Vaihingen) is used in testing, with a detailed experimental setting and analysis of challenges. Then, quantitative results for the number of pooling layers against pooling type are investigated for the described deep learning architecture, followed by a discussion of the results. Interesting findings are summarised and a recommendation for wider implementation is made. The main contribution of the research is to show that the deep learning architecture (PSPNet) can be efficiently applied to land cover classification of remotely sensed imagery, with a classification rate of up to 0.794218 average accuracy on the Vaihingen test set using four pooling layers with average pooling, which shows superior performance on small-object segmentation such as the car class, at 0.861777. A comparative result for four pooling layers with max pooling is also provided, achieving 0.7963976. From a practical point of view, all experiments were run on an NVIDIA GeForce GTX 1080 Ti GPU, and the architectures were implemented in Python with TensorFlow. The implementation of the selected, recently developed deep semantic segmentation methods has shown a very high level of efficiency in detecting all the annotation limitations in the evaluated datasets, where a revisit is strongly recommended.
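As a rough sketch of the architecture under review, the snippet below implements a PSPNet-style pyramid pooling module in TensorFlow/Keras (the framework named in the paper). The bin sizes (1, 2, 3, 6) follow the original PSPNet paper; the input shape, channel counts and pooling type here are illustrative assumptions, not the exact configuration evaluated above.

```python
# PSPNet-style pyramid pooling: pool at several grid resolutions, project,
# upsample, and concatenate with the input feature map.
import tensorflow as tf
from tensorflow.keras import layers

def pyramid_pooling_module(features, bin_sizes=(1, 2, 3, 6)):
    # Assumes spatial dims are divisible by every bin size.
    h, w = features.shape[1], features.shape[2]
    pooled = [features]
    for bins in bin_sizes:
        x = layers.AveragePooling2D(pool_size=(h // bins, w // bins))(features)
        x = layers.Conv2D(filters=features.shape[-1] // len(bin_sizes),
                          kernel_size=1, activation='relu')(x)
        x = layers.UpSampling2D(size=(h // bins, w // bins),
                                interpolation='bilinear')(x)
        pooled.append(x)
    return layers.Concatenate()(pooled)

inputs = tf.keras.Input(shape=(48, 48, 512))   # e.g. a backbone feature map
outputs = pyramid_pooling_module(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
```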

Value Added Abstracts

Pages: 15 - 15

Proposal Toward "No-Fault" Civil Liability Regulation following Artificial Intelligence Evolution in Health-Care

Emiliano Marchisio

Civil liability may be understood as indirect market regulation, since the risk of incurring liability for damages provides an incentive to invest in safety. Such an approach is inappropriate in markets for artificial intelligence devices. The current paradigm of civil liability allows redress only insofar as "somebody" is identified as liable to pay it (either because of a fault or pursuant to a strict liability rule). However, robots and programs may "behave" quite independently of the instructions initially provided by programmers and constructors. This creates a disincentive to new technologies (artificial intelligence etc.) insofar as producers and/or programmers are charged with liability even if the damage derives from a perfectly "correct" functioning of algorithms and robots. This would not foster safety with respect to technological issues, because there would be no "fault" to blame or prevent. Instead, it would expose producers and programmers to unforeseeable liability, which would discourage them from entering into the market or developing it, thus hindering technological evolution. Therefore, I argue that artificial intelligence requires that redress obligations for damages not caused by negligence, imprudence or unskillfulness (i.e. when producers and programmers complied with scientifically validated standards) should move from being an issue of civil liability to one of financial management of losses. This could mirror, I propose, the current "no-fault" schemes adopted, with respect to e.g. medical civil liability, in a few jurisdictions such as New Zealand. My paper focuses, in particular, on the health-care market.

Value Added Abstracts

Pages: 16 - 16

Optimizing Heuristic Search Algorithms using Neural Networks

Amine Ouardi

In contrast to uninformed search algorithms, which perform a systematic search, heuristic search algorithms are based on rules that let them estimate, in a predictive way, the minimal cost of the path from the current state to the goal.


In this sense, the A* algorithm is an example of a heuristic-based algorithm that is guaranteed to find a least-cost path to a goal state if it uses an "admissible heuristic". A heuristic is said to be "admissible" if it never overestimates the real path cost from the current state to the goal. Furthermore, if the condition h(x) ≤ d(x, y) + h(y) is satisfied by the heuristic h for every edge (x, y), where d denotes the edge length, then h is called consistent. With consistent heuristics, finding an optimal path without processing any node more than once is guaranteed.
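To make the admissibility condition concrete, here is a minimal A* sketch on a 4-connected grid, using Manhattan distance as the heuristic; with unit step costs this heuristic never overestimates the true remaining cost (admissible) and satisfies h(x) ≤ d(x, y) + h(y) (consistent). The grid and coordinates are invented for illustration; this is not the paper's implementation.

```python
# Minimal A* on a 4-connected grid with an admissible Manhattan heuristic.
import heapq

def astar(grid, start, goal):
    def h(node):  # Manhattan distance: admissible for unit step costs
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]          # (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g                            # cost of a least-cost path
        if g > best_g.get(node, float('inf')):
            continue                            # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                 # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))              # -> 6 (detour around the wall)
```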


The main idea consists of developing a neural network that can optimize these heuristics to further refine the results of the A* algorithm. To achieve that goal we must find the best synaptic coefficients; for that reason, a learning phase is needed during which the network parameters are adjusted until the best admissible and consistent heuristic is obtained, one that dominates any other heuristic (h1 dominates h2 if h1(n) ≥ h2(n) for every node (state) n).


During this learning phase, the neural network takes as inputs representative examples in the form of pairs of problems and heuristics ({P1,h1}; {P2,h2}; ...; {Pn,hn}), so that it can finally compute the best heuristic regardless of the input.

Value Added Abstracts

Pages: 17 - 17

Hierarchical Reinforcement Learning for Factual Claim Generation

Fatima T Al-Khawaldeh

We propose a novel Hierarchical Reinforcement Learning (HRL) model to detect the factuality of a claim, and to generate a new, corrected claim if it is not factual. Initially, we segment each sentence into several clauses using sentence-level discourse segmentation, then measure cosine similarity to decide whether a clause is relevant to a claim. All relevant clauses are sent to the high-level policy, where deep communicating agents encode them. Each agent adopts a hierarchical attention mechanism, with word-level and clause-level attention networks, to select informative words and clauses relevant to a specific claim. In the word-level claim attention network, the word encoding layer concatenates the claim representation to each word embedding and then summarizes the information with a bi-directional LSTM. The word attention layer focuses on the terms that are important to the meaning of the clause with respect to the claim, producing clause vectors. In the clause-level claim attention network, the clause encoding layer applies a bi-directional LSTM to capture contextual clause representations. After that, in the clause attention layer, an attention mechanism computes the attention weight of each claim-clause representation to produce contextual information conditioned on the claim representation. We use a message sharing mechanism to help other agents' encoders generate better contextual information conditioned on the messages received from other agents. The context and states from the environment are used to create all possible sub-goals (to copy or to generate), which should be achieved by the lower-level agent policy to select a series of actions (words) and produce a new sequence of words. We apply a reward function that computes the factuality of the new claim using entailment and semantic similarity metrics.
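As a toy numpy sketch of the word-level claim attention step described above: the claim vector is concatenated to each word encoding, each word is scored against a learned context vector, and the clause vector is the attention-weighted sum of the word encodings. Dimensions and weights are random placeholders, not trained values, and the bi-directional LSTM encoders are omitted.

```python
# Toy word-level claim attention: claim-conditioned scoring + weighted sum.
import numpy as np

rng = np.random.default_rng(0)
d_word, d_claim, n_words = 8, 8, 5

words = rng.normal(size=(n_words, d_word))      # word encodings of one clause
claim = rng.normal(size=d_claim)                # claim representation

W = rng.normal(size=(d_word + d_claim, d_word)) # attention projection (learned)
v = rng.normal(size=d_word)                     # attention context vector

concat = np.hstack([words, np.tile(claim, (n_words, 1))])
scores = np.tanh(concat @ W) @ v                # one relevance score per word
weights = np.exp(scores) / np.exp(scores).sum() # softmax over the words
clause_vector = weights @ words                 # claim-conditioned clause repr.
print(weights.round(3), clause_vector.shape)
```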

Value Added Abstracts

Pages: 18 - 18

Overview: information compression as a unifying principle in human cognition as a foundation for the SP Theory of Intelligence

J Gerard Wolff

From pioneering research by Fred Attneave, Horace Barlow and others, there has been a progressive accumulation of evidence that compression of information is a unifying principle in human learning, perception, the processing of natural language, several forms of reasoning, and other aspects of human cognition.

These insights provide the foundation for an extended programme of research developing the SP System—meaning the SP Theory of Intelligence and its realisation in the SP Computer Model—and for an exploration of their many potential applications.

Central to the SP System is the powerful concept of SP-multiple-alignment, borrowed and adapted from the concept of 'multiple sequence alignment' in bioinformatics. This provides an effective means of compressing information, and is the key to the SP System's strengths and potential in diverse aspects of intelligence, in the representation of diverse kinds of knowledge, and in the seamless integration of diverse aspects of intelligence and diverse forms of knowledge, in any combination.

There are many potential applications of the SP System. These include helping to solve nine problems with big data, helping to develop human-like intelligence in autonomous robots, helping to understand natural vision and to develop computer vision, providing the basis for the development of an intelligent database system, providing a basis for the processing of natural language and for machine-assisted medical diagnosis, and more.

It is intended that the SP Computer Model will provide the basis for the development of an industrial-strength SP Machine, starting with the application of high levels of parallel processing.

Value Added Abstracts

Pages: 19 - 19

Ethical Challenges in the Convergence of XR and AI

Emory Craig

The convergence of immersive technologies (XR) with artificial intelligence (AI) will have a profound impact on culture and society. XR is not just another media form, but a rapidly evolving space for (virtual) human experience. Using AI, we will have the ability to create intensely realistic virtual environments and responsive avatars. These developments will reshape human experience and social interactions across multiple areas, including business, education, and entertainment. We already see early glimpses of this future in interactive experiences such as Fable's Wolves in the Walls and Magic Leap's AI avatar, Mica. But how will we address the ethical challenges of AI-driven avatars in virtual worlds? Could embodied avatars eventually seem more helpful than humans? More trustworthy? More threatening? What will happen when anyone can create an avatar that looks like anybody alive (or who has ever lived) and animate that avatar to do whatever they'd like? What obligations do AI developers have to consider and shape the long-term social consequences of their work? As we move from coding platforms to the creation of virtual worlds and avatars, are we ready for embodied AI?

Past Conference Report

Pages: 20 - 20

International Conference on Artificial Intelligence Announcement

Editor

Following the successful completion of the Artificial Intelligence Congress series, we are pleased to invite you to the 2nd International Conference on Automation and Artificial Intelligence, originally announced for London, UK on May 21-22, 2020. The congress is now planned to take place in Las Vegas, USA, on March 26-27, 2021.

The Artificial Intelligence 2020 gathering will provide excellent research insight and a wealth of ideas. The conference will also present advanced research and techniques in Automation and Artificial Intelligence and related fields.

Artificial Intelligence 2020 will concentrate on the theme of innovations and advancements in Artificial Intelligence. We are certain that you will enjoy the scientific program of this upcoming conference. The aim of the Artificial Intelligence conference is to present cutting-edge research that helps attendees see how techniques have progressed and how the field has developed in recent years.
