Companies are increasingly convinced that they need to innovate to stay competitive. But with so many terms in play, such as Artificial Intelligence, digital transformation, Internet of Things, and Design Thinking, managers often wonder where they should start. And starting this journey improperly can lead a company into decline.
Everyone likes to hear success stories about innovation. But what people need to understand is that, to succeed, they must first identify what can bring them closer to failure. Some executives believe that innovation is a "project" with a beginning, a middle, and an end. It is not! Innovation is a journey: a change of culture and a change of mindset for leaders and all employees.
The company needs to listen more: to employees, customers, and suppliers. It is necessary to think about the internal and external challenges you would like to solve, the demands of your clients that no one addresses, how to improve an experience or a process, how to do more with less, and what the answers are (even before you have the questions).
This presentation discusses how not to fall into the innovation pitfalls, explains how to start the innovation journey, and presents what Artificial Intelligence is from a fully business-oriented view, with practical, real-world cases.
Mr. Osamu Miyashita
J-Phoenix Research, Inc. (JPR) is a subsidiary of Scala, Inc., which is listed on the first section of the Tokyo Stock Exchange, and JPR advises an AI-based engagement fund (AI fund) operated by Scala, Inc. The AI fund focuses on value-creation opportunities in undervalued companies, using the AI tools appropriate to each opportunity. To identify these opportunities systematically, we use the consulting framework of GCC Management™, which we believe is the most important key concept for the success of running our AI fund. GCC Management™ systematizes shareholder value and employee happiness with three elements, "Growth, Connection and Confidence", which are simple and easy for everyone to understand, including employees, AI engineers, CxOs, and professional investors, and which provide a consistent framework of value-judgement criteria for AI tools. We illustrate how GCC Management™ works in this report.
Since skin disease is one of the most common human ailments, intelligent systems for classifying skin conditions have become a new line of research in deep learning, of great importance to dermatologists. Accurate recognition of the disease is very challenging due to the complexity of the skin texture and the visual similarity between diseases. Skin images are filtered to remove unwanted noise and further processed to enhance the image. We used 25,331 clinical skin-disease images for training, drawn from lesions of eight categories and from cases with no skin ailment at different anatomic sites, and 8,238 images for testing. The classifier was used to classify skin lesions such as melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, vascular lesion, and squamous cell carcinoma. A Residual Neural Network (ResNet), a type of Convolutional Neural Network, is used to classify the images and produce a diagnosis report as a confidence score with high accuracy. ResNet makes the training process faster by skipping over identity layers, yielding an effective improvement in training at each successive layer. This investigation can help specialists with early diagnosis, identifying the kind of disease and beginning treatment if required.
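The identity "skip" that lets ResNet train deep networks quickly can be illustrated with a minimal residual block; the sketch below (in NumPy, with made-up weights, not the study's actual network) shows how the block reduces to the identity when the learned transformation contributes nothing:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """A minimal residual block: output = ReLU(F(x) + x).

    The identity shortcut lets the signal skip the transformation F,
    which is what allows very deep ResNets to train quickly.
    """
    out = relu(x @ w1)      # first transformation layer
    out = out @ w2          # second transformation layer
    return relu(out + x)    # skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
# Zero weights make F(x) = 0, so the block collapses to ReLU(x):
# the layer is effectively "skipped".
w_zero = np.zeros((8, 8))
assert np.allclose(residual_block(x, w_zero, w_zero), relu(x))
```

In the full classifier, stacks of such blocks feed a softmax head that emits the per-class confidence score mentioned above.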
There are interesting linguistics curricula, such as those at Nanyang Technological University in Singapore and the University of California, Berkeley in the USA, that improve their programs by incorporating software knowledge, allowing students to understand approaches that can be implemented in Artificial Intelligence.
According to the World Economic Forum, Artificial Intelligence is a key driver of the fourth industrial revolution that everyone is living through. It shows that the labor market is changing and that people need to adapt by understanding this shift and offering added value. Because so many services now exist, such as chatbots, image description, facial and voice recognition, translation programs, and virtual assistants like Siri, Cortana, and Google Home, a linguistics professional could find work in these programs.
However, some linguistics program curricula in Latin America do not cover or deepen this field, especially in Colombia. This presents an opportunity: investment in Artificial Intelligence by Colombian companies increased from 3.3% in 2019 to 3.5% in 2020, the Colombian Government is seeking zero income tax for companies that invest in this area for their first five years, and Colombia, as a member of the Organisation for Economic Co-operation and Development (OECD), needs ethical principles for the use of Artificial Intelligence. It is therefore necessary to train linguistics professionals who can offer services of quality, and in quantity, in these fields. This not only offers a better quality of life to linguistics professionals; it also increases investment in Artificial Intelligence in Colombia.
Human / AI interaction loop training as a new approach for interactive learning with reinforcement-learning: Reinforcement-Learning (RL) in various decision-making tasks of Machine-Learning (ML) provides effective results with an agent learning from a stand-alone reward function. However, it presents unique challenges with large amounts of environment states and action spaces, as well as in the determination of rewards. This complexity, coming from high dimensionality and continuousness of the environments considered herein, calls for a large number of learning trials to learn about the environment through RL. Imitation-Learning (IL) offers a promising solution for those challenges, using a teacher’s feedback. In IL, the learning process can take advantage of human-sourced assistance and/or control over the agent and environment. In this study, we considered a human teacher, and an agent learner. The teacher takes part in the agent’s training towards dealing with the environment, tackling a specific objective, and achieving a predefined goal. Within that paradigm, however, existing IL approaches have the drawback of expecting extensive demonstration information in long-horizon problems. With this work, we propose a novel approach combining IL with different types of RL methods, namely State–action–reward–state–action (SARSA) and Proximal Policy Optimization (PPO), to take advantage of both IL and RL methods. We address how to effectively leverage the teacher’s feedback – be it direct binary or indirect detailed – for the agent learner to learn sequential decision-making policies. The results of this study on various OpenAI-Gym environments show that this algorithmic method can be incorporated with different RL-IL combinations at different respective levels, leading to significant reductions in both teacher effort and exploration costs.
Mohamed Ben Haddou
Virtual Personal Assistants (VPAs) are changing the way we interact with our environment. The rise of voice-enabled devices, such as Amazon Alexa and Google Home, offers a natural and intuitive way to interact with machines to speed up and improve daily tasks. Today's smart speakers are marketed as intelligent assistants able to understand, take decisions, and support people in several tasks. However, this vision has yet to fully materialize. The next generation of VPAs will be capable of carrying out complex tasks and non-routine work. They will be able to follow entire professional conversations, formulate replies to business messages, find documents in a company information system related to specific requests, follow up on and manage users' tasks, and help plan meetings accordingly. This new generation should bring a paradigm change in how VPAs are designed and used, which could have profound implications for the way business is conducted in the era of voice computing.
Ms. Monika Koprowska
Data acquisition, cleaning, and analysis are inevitable in Data Science, but throughout the process the focus should remain on the business goal, and hence on effective communication of the data insights that can lay the foundation for impactful business actions. During this presentation I will share useful tips on how to communicate analysis outcomes so that they are understandable to a business, often non-technical, audience.
Dr. Marcell Vollmer
Marcell Vollmer will discuss how the digital economy is revolutionizing business models and how to run customer-centric enterprise processes, sharing insights from innovations such as AI, ML, Blockchain, and IoT, and from new applications including chatbots. He will inspire you to think of new ways to drive value for your business and to become more customer-centric in all processes. As a former C-level executive at SAP, Marcell redefined business models, delivered innovations, and transformed global organizations into strategic, end-to-end, customer-centric organizations leveraging automation, process mining, and artificial intelligence.
In this keynote he will share insights on successful transformations of business models and on how to scale by adopting digital transformation with AI and ML to prepare for the future.
Adebayo Adedapo Emmanuel
Privacy is a major concern for governments, corporate organizations, and individuals. The emergence of digitized methods for storing information has pushed mankind into the 21st century and has also brought about the need for data privacy and confidentiality. Cybercrime began to take off in the early 2000s when social media came to life; the use of internet resources allows a user to disclose information in certain contexts while the information remains protected and its uses remain limited by an obligation to maintain confidentiality. Researchers and other controllers and processors of personal data have to protect personal information from unauthorized access, as the date of birth collected as part of personal information can pose a great risk to the privacy rights of a user and increase the rate of identity theft. This research is based on the collection of personal information from 300 users (ages 22-60), using a numerological approach. The information collected from each user was analyzed by reducing the user's date of birth to a numerical value, which in turn gives a life path number; the life path number revealed who the user is, their deepest values, and some of their life challenges. From the analysis of the data, 258 users confirmed that the result was very accurate, 23 users said it was somewhat accurate, 12 users were not sure (50/50), 3 users said it was inaccurate, and there were 4 void results. The results of the analysis provide proof that the provision of a user's birth information poses a great threat to the privacy and safety of the user and is also instrumental in cybercrime and social engineering.
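The numerological reduction described above, collapsing a date of birth to a single-digit life path number by repeated digit summing, can be sketched as follows (a simplified version that, as an assumption, does not preserve the "master numbers" 11 and 22 that some numerology schemes keep):

```python
def life_path_number(dob: str) -> int:
    """Reduce a date of birth (any digit-bearing format, e.g. '1990-07-23')
    to a single digit by repeatedly summing its digits."""
    total = sum(int(ch) for ch in dob if ch.isdigit())
    while total > 9:                      # keep reducing until one digit
        total = sum(int(d) for d in str(total))
    return total

# 1+9+9+0+0+7+2+3 = 31 -> 3+1 = 4
print(life_path_number("1990-07-23"))
```

That a single public datum yields such a compact, stable profile key is precisely why the paper treats birth dates as a privacy and social-engineering risk.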
B. J. Samuel
Silver salts and their colloidal formulations have been used since ancient times to treat ulcers, burns, chronic wounds, and other infections, but their use was discontinued due to the interfering effects of salts and the development of new antibiotics. However, about a decade ago, interest in nanosilver was renewed owing to its high surface-area-to-volume ratio and its size-dependent, unique optical, electrical, and thermal properties. Silver nanoparticles were synthesized using an eco-friendly method with extract of Carica papaya as the reducing and stabilizing agent, and silver nitrate solution as the precursor. A visible colour change from yellow to dark brown confirmed the formation of the nanoparticles, and UV-Vis spectroscopy showed an absorbance of 0.69 at a wavelength of 435 nm for the silver nanoparticles. The antimicrobial activity of the synthesized nanoparticles was studied against Escherichia coli, Salmonella typhi, and Staphylococcus aureus. The biosynthesized silver nanoparticles showed antimicrobial activity against the test isolates, with zones of inhibition ranging from 10 to 15 mm. The antimicrobial activity of the silver nanoparticles was statistically significant compared to chloramphenicol at the 0.05 level. Generally, the Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC) values for the silver nanoparticles ranged from 0.22 to 0.42 g/ml and 1 to 30 cfu/ml, respectively, against the test organisms; S. aureus was most sensitive, while E. coli and S. typhi were least sensitive to the silver nanoparticles. The corresponding values for the C. papaya extract ranged from 0.32 to 0.42 g/ml and 6 to 36 cfu/ml, respectively; among the test organisms, E. coli and S. aureus were more sensitive, while S. typhi was least sensitive to the C. papaya nanoparticles.
SOC analysts are under siege trying to keep pace with the ever-changing threat landscape. They are overworked, burned out, and bombarded with a sheer number of alerts that they must carefully investigate. This intense workload can be a true test of anyone's patience. We need to empower Security Operations Center (SOC) analysts with data science implementations to overcome the monotonous work that is leading to career burnout. Security departments should seek data-driven approaches for more efficient evaluation of operations. Data science use cases such as detection-rule scoring and DGA detection through machine learning are example implementations with immediate added value. With this insight, security engineers, management, and analysts alike can be empowered to make data-driven decisions that tune detections and lessen the burden on the SOC by reducing the number of false-positive cases it must investigate.
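To make the DGA-detection use case concrete, one classic feature that ML pipelines score is the character entropy of a domain name, since algorithmically generated domains tend to look more random than human-chosen ones. The sketch below is a toy heuristic, not the talk's full machine-learning pipeline, and the threshold behaviour would need tuning on real data:

```python
import math
from collections import Counter

def domain_entropy(domain: str) -> float:
    """Shannon entropy (bits/char) of the registrable label of a domain.

    Higher entropy suggests an algorithmically generated domain (DGA);
    human-chosen names reuse letters and score lower.
    """
    name = domain.split(".")[0].lower()
    counts = Counter(name)
    n = len(name)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A human-chosen name scores lower than a random-looking one.
print(domain_entropy("google.com"), domain_entropy("xq7gz9krt2v.com"))
```

In a real SOC pipeline this would be one feature among many (n-gram likelihood, length, TLD reputation) feeding a trained classifier.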
As hacking stories have increasingly made news headlines, those behind the attacks have become ever more adept at co-opting the media into covering their leaks. From Ashley Madison and Sony to the 2016 US Presidential election, news coverage of cyber attacks has become increasingly weaponized by those behind them. Hacking groups have in some cases made direct contact with journalists to steer them towards a particular story, and in other cases have carefully triaged and strategically leaked the data for maximum impact. This has left journalists in a difficult position: exposing the source of the information could well be unethical, but keeping them anonymous could result in the journalist being manipulated to achieve the aims of a hidden and malign actor who escapes culpability. For hackers wishing to expose government or corporate malpractice, it raises practical issues about how to approach and work with news outlets and their staff, both in terms of security and anonymity and in terms of the ethics of the eventual news coverage that results. Handing information over to a journalist may mean handing over control of how it is used. In this talk, investigative journalist Geoff White (who has covered technology for, among others, BBC News, Channel 4 News and Forbes) takes five high-profile hacking incidents and analyses how they reflect key trends and tactics for working with (and in some cases manipulating) the news media.
Osteoarthritis (OA) is the most common cause of disability among the ageing population. Awareness of machine learning as a tool in medicine is growing rapidly and has provided new avenues for research into a number of diseases and infections. Creating better predictive models for these diseases could provide opportunities for better care; we apply this to osteoarthritis, a degenerative disease that affects a large number of older people of both genders. A number of studies have been undertaken to understand the prediction of osteoarthritis risk using data mining techniques. Hence, this study focuses on using two different data mining techniques, the Naïve Bayes and K-nearest neighbor (KNN) algorithms, to predict osteoarthritis risk in Nigerian patients. The performance of these two classification techniques was evaluated in order to determine the most efficient and effective model. To achieve this, a dataset of patients who had participated in an osteoarthritis treatment program was used and analyzed. Naïve Bayes showed higher accuracy and lower error rates than the KNN method, and the evaluation criteria proved Naïve Bayes to be the more effective and efficient classification technique for predicting osteoarthritis risk among patients at the study location. Our results show that it is possible to build an efficient and effective classifier for osteoarthritis risk.
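The comparison pattern described above can be sketched with scikit-learn; since the patient dataset is not public, the example below uses a synthetic stand-in, so the accuracies (and which model wins) are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the (non-public) patient dataset.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Cross-validated accuracy for each classifier, as in the study's
# side-by-side evaluation.
nb_acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
knn_acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
print(f"Naive Bayes: {nb_acc:.3f}  KNN: {knn_acc:.3f}")
```

With real clinical features, the same scaffolding extends naturally to error-rate and confusion-matrix comparisons of the two models.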
Njoku Reuben Chukwuemeka
There is a need for improved security in ATM transactions. Due to the tremendous increase in the number of criminals and their activities, the ATM has become insecure. There can also be situations where an account holder is incapacitated, or emergencies in which the account holder cannot be physically present at the ATM; yet ATM systems today use no more than an access card and PIN for identity verification, and the Personal Identification Number (PIN) alone does not provide good security. A fingerprint, by contrast, is unique and cannot be duplicated by others. This paper combines GSM technology and fingerprint recognition technology for identification. Alongside fingerprint recognition, we embed a GSM modem that generates a 4-digit one-time password and sends it to the main user's mobile number only when the nominee user enrols a fingerprint. The fingerprints of the nominee and the card holder are collected and stored in the database, and every enrolled fingerprint is checked against the database. The 4-digit one-time password is then entered on the keypad, after which the customer can proceed with the transaction. We also provide room for additional nominees in the fingerprint identification process for when the actual card holder is unable to perform transactions. Because biometric features cannot be replicated, this proposal will go a long way toward solving the problem of account safety. The Nigerian banking sector has over the years experienced significant changes and development in its information and communication technology, among them the introduction of the Automated Teller Machine (ATM), intended to decongest banking halls, as customers can now go to the nearest ATM to carry out banking transactions such as cash withdrawal, cash deposit, bill payments, and transfer of funds between accounts. A further purpose of this paper is to measure customer satisfaction with ATM services.
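The fingerprint-then-OTP flow can be sketched as a small state machine; the class and method names below are hypothetical, and fingerprints are stood in by opaque tokens rather than real biometric templates:

```python
import secrets

def generate_otp() -> str:
    """4-digit one-time password, as in the proposed nominee flow."""
    return f"{secrets.randbelow(10_000):04d}"

class NomineeSession:
    """Toy model of the flow: fingerprint match -> OTP sent to the card
    holder's phone (via the GSM modem) -> nominee enters the OTP to
    unlock the transaction. Illustrative only."""

    def __init__(self, enrolled_fingerprints):
        self.enrolled = set(enrolled_fingerprints)  # the stored database
        self.pending_otp = None

    def scan_fingerprint(self, fp) -> bool:
        """Check the scan against the database; on a match, issue an OTP."""
        if fp not in self.enrolled:
            return False
        self.pending_otp = generate_otp()  # would be sent over GSM
        return True

    def enter_otp(self, otp) -> bool:
        """Single-use check: the OTP is consumed whether right or wrong."""
        ok = self.pending_otp is not None and otp == self.pending_otp
        self.pending_otp = None
        return ok
```

A rejected scan never issues an OTP, and a consumed OTP cannot be replayed, which is the property that keeps an intercepted code from being reused.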
Why do we conduct security penetration testing? What's the objective? What's the right approach? Do we have the right supplier? Does the methodology matter? Is it worth it? What should we get for our investment? How can we prove it? More importantly, how can we improve it? Very few businesses have answers to these straightforward, practical questions, yet they continue to spend vast sums conducting security penetration testing year after year with little tangible return. This session begins by presenting a quick, simple formula template for calculating the annual loss expectancy (ALE) and return on investment (ROI) required to establish a business case for a security penetration testing program. The presenter then discusses how to ensure the right testing approach, objective, scope, methodology, qualifications, and reporting formats are used for your next test, providing over 30 specific actions for improving the ROI of security penetration testing. The session delivers simple, pragmatic, cost-effective actions attendees can take back to their businesses for implementation. Upon completion, attendees will receive a take-away list of these recommended actions for their reference. The content of this presentation is based on over 20 years of penetration testing case studies and is devoid of commercial content.
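A standard form of the ALE and ROI calculation the session describes (ALE = single loss expectancy × annualized rate of occurrence; ROI = net risk reduction over cost) can be sketched as follows; the dollar figures are hypothetical, not the presenter's:

```python
def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = single loss expectancy (SLE) x annualized rate of occurrence (ARO)."""
    return sle * aro

def pentest_roi(ale_before: float, ale_after: float, program_cost: float) -> float:
    """ROI = (reduction in expected annual loss - program cost) / program cost."""
    risk_reduction = ale_before - ale_after
    return (risk_reduction - program_cost) / program_cost

# Illustrative numbers only: a $200k incident, halved-then-some in likelihood.
ale_b = annual_loss_expectancy(sle=200_000, aro=0.5)  # $100k expected loss/yr
ale_a = annual_loss_expectancy(sle=200_000, aro=0.1)  # $20k after remediation
print(pentest_roi(ale_b, ale_a, program_cost=40_000))  # 1.0 => 100% return
```

A positive ROI under defensible SLE/ARO estimates is what turns the testing program from a recurring expense into a business case.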
Mission-critical infrastructures and systems are increasingly adopting enterprise blockchain and distributed ledger technologies without fully understanding the vulnerability, threat, and risk environment that exists today. Further, organizations are even less prepared for the coming ominous threat from quantum computers (QC). In 2018, Gartner revealed that quantum computing is a digital disruption that organizations may not be prepared for and that CIOs may not see coming. Cyber-attacks on enterprise distributed ledgers can adversely impact intellectual property, healthcare information, personally identifying information, human safety, and the reputation and trust of organizations. Enterprise distributed ledgers are complex systems that incorporate cryptography, fault tolerance, and distributed consensus, and their cybersecurity is not understood in a systematic and thorough way. Standard network defenses and threat detection are not enough to thwart adversaries from exploiting vulnerable distributed ledgers in a post-quantum environment. This threat environment requires a new cyber-resilient approach, built in and accounting for the arrival of large-scale, fault-tolerant QCs, to be immediately implemented in enterprise blockchain deployments. To date there are no known public plans or strategies to incorporate cyber resilience into enterprise distributed ledgers in the Post-Quantum Cryptography (PQC) era. This research proposes an enterprise distributed ledger cyber-resilience framework that can be used to analyze the security and risk of enterprise distributed ledger systems. Specifically, the author proposes metrics, analysis methodologies, and performance parameters to be included in an enterprise distributed ledger framework that is measurable, testable, and affordable in the PQC era.
Pablo Escobar de la Oliva
As organizations look for ways to improve operational efficiency with new enterprise applications and emerging technologies, the operationalization of Artificial Intelligence solutions becomes more relevant, and the integration between automation solutions such as RPA and data and analytics platforms is critical to deliver automation and business answers. The first part of the session covers the automation challenges of using new AI techniques and enterprise data and analytics platforms integrated with automation tools. The second part presents a case study of the enterprise cognitive automation solution we have implemented around documentary processes. The solution applies cognitive capabilities to unstructured scanned documents, such as intelligent classification, intelligent concept extraction, and intelligent decisions, using the latest Artificial Intelligence techniques. It is enabling the automation of loan application and digital onboarding processes at enterprise scale, with relevant business outcomes.
Artificial Intelligence permeates almost every facet of life, and it is now making its mark across various functions of the organization. It has also found importance in HR activities such as talent acquisition, matching candidates to an organization's requirements using various AI tools, made possible by intelligent filtering applied to big data. DNA profiling is an area of future technological necessity that generates a complete characteristic chart of a person; combined with other information, it can be of great use in the acquisition process in certain areas. Military and defense services involve extensive medical and other physiological screening of candidates, and a huge amount of taxpayers' money is spent by the government on checking the medical and physical traits of the many candidates for these jobs. We propose a technique that may help analyze the physical traits of candidates preemptively, which may reduce the expenditure on checkups and other recruitment processes. The technique is to combine DNA profiling with Artificial Intelligence tools to preemptively filter candidates with the required physical characteristics for the specific job area, namely the military/defense recruitment process. Targeted experimental work provides the justification for this hypothesis.
Mr. Ylber Limani
Business processes are undergoing complex challenges related to fast technological change. The proper functioning of processes necessitates increased flexibility, higher reliability, and greater working speed of production systems and processes. The integration of information technology is accomplished through the development and use of cyber-physical systems, which are the enablers of the industrial transformation named "Industry 4.0". The debates about digital transformation and competitive advantage have directed industries toward this new business vision. Since the concept of Industry 4.0 and its impact on business processes create various challenges, this research addresses and examines the consequences and potential of Industry 4.0 for advanced business transformation processes. The scope of this research is limited to the study of the functional integration of cyber-physical systems, Artificial Intelligence (AI), and data science (data security), providing the potential for the functioning of new technologies, with a focus on developing countries.
The research utilizes qualitative and quantitative approaches to data collection and analysis, based on the literature and on case studies. Its contribution is the identification and analysis of the needs, problems, and benefits related to the implementation of Industry 4.0 in automated business processes in developing countries.
Dr. Yifan Zhu
Medical artificial intelligence (AI) is a hot area of AI application, of which clinical diagnosis is one important use. Diagnostic AI (D-AI) is the simulation of a doctor's clinical thinking for clinical analysis and decision-making, abilities that come from many years of medical education and clinical experience. Therefore, D-AI should likewise be based on medical knowledge from medical school and on clinical experience. However, at present most D-AI is based on big-data analysis, and this is not right. The first big problem with big-data-based D-AI is that it is not evidence-based analysis, which is absolutely required by evidence-based medicine in clinical analysis and decision-making. The second is that its algorithms cannot be interpreted through medical knowledge and thus produce an algorithmic black box, which it is now commonly agreed we should avoid. A medical-knowledge-based D-AI algorithm, simulating the clinician's clinical thinking, is needed to build D-AI correctly.
We'll see why and how it is very important to compute uncertainty in inferential statistics and predictive machine learning models.
1) Deep dive in random forest
Random forests naturally give us an estimate of the predictive distribution for each sample, thanks to the bagging technique.
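Because each bagged tree makes its own prediction, collecting the per-tree outputs yields an empirical distribution per sample; a sketch with scikit-learn on synthetic data (the percentile interval below is a crude illustration, not a calibrated interval):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=4, noise=10.0, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# One prediction per bagged tree per sample: an empirical distribution.
per_tree = np.stack([tree.predict(X[:5]) for tree in forest.estimators_])

mean = per_tree.mean(axis=0)                       # the usual forest output
lo, hi = np.percentile(per_tree, [5, 95], axis=0)  # a crude 90% band
```

The spread of `per_tree` across trees is exactly the "natural" uncertainty estimate bagging provides for free.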
2) Generalisation for regression
The quantile (pinball) loss is useful for computing prediction intervals with any regression model. It is, however, computationally costly; smooth losses such as log-cosh can help mitigate this drawback.
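Both losses are short to write down; a NumPy sketch (batch means over toy arrays, for illustration):

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball loss: the asymmetric penalty whose minimizer is the q-th
    quantile, so fitting at q=0.05 and q=0.95 brackets a 90% interval."""
    e = y_true - y_pred
    return np.mean(np.maximum(q * e, (q - 1) * e))

def log_cosh_loss(y_true, y_pred):
    """Smooth, everywhere-differentiable surrogate; behaves like squared
    error near zero and absolute error for large residuals."""
    return np.mean(np.log(np.cosh(y_pred - y_true)))

y = np.array([1.0, 2.0])
print(quantile_loss(y, np.array([0.0, 4.0]), q=0.5))  # 0.75 (= MAE / 2)
```

At q = 0.5 the pinball loss is half the mean absolute error, which is a handy sanity check when wiring it into a model.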
3) What about classification
In classification, probability is a measure of uncertainty... but does every model give us good probabilities? Let's plot some reliability curves to check whether we need to calibrate the output with a sigmoid or an isotonic regression!
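The reliability check and the calibration fix are both one-liners in scikit-learn; the sketch below uses GaussianNB on synthetic data purely as a stand-in classifier:

```python
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, random_state=0)
clf = GaussianNB().fit(X, y)
prob = clf.predict_proba(X)[:, 1]

# Fraction of positives vs. mean predicted probability per bin:
# a well-calibrated model hugs the diagonal of this reliability curve.
frac_pos, mean_pred = calibration_curve(y, prob, n_bins=10)

# If the curve is off the diagonal, wrap the model in sigmoid (Platt)
# or isotonic calibration.
calibrated = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=3).fit(X, y)
```

Plotting `mean_pred` against `frac_pos` gives the reliability curve; the distance from the diagonal tells you how much calibration is needed.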
Mr. Rudradeb Mitra
Omdena is a global platform where AI engineers & enthusiasts from diverse backgrounds collaborate to solve complex social problems. We are:
1. Creating opportunities for all: Online education has given people access to knowledge even in remote parts of the world, but not to opportunities.
2. Building AI for good: We believe the biggest value of AI is in solving social problems and helping people at the bottom of the pyramid, but despite much talk, not much is done.
3. Incentivizing collaboration rather than competition, through which people learn a lot.
4. Building Ethical AI: We believe involving a global community from different backgrounds will help us to build ethical AI solutions.
In the five months since we started, over 350 AI enthusiasts and engineers from 54 countries have participated in challenges addressing hunger, sexual harassment, conflict, PTSD, and gang violence, working with organizations such as the UN Refugee Agency, the UN World Food Programme, and Safecity.
Mr. Sam Geogi
In hot-rolled steel production, surface defects occur due to material or process problems. Production quality should be monitored in real time to identify surface defects as they occur; this helps to quickly identify the cause of a defect and resolve it. Thus, by correctly identifying defects in real time, we can rectify the problem, avoiding defective production and saving material and process costs. Production speeds for hot-rolled steel sheets reach up to 18 m/s, which makes real-time monitoring extremely difficult, as the detection system must have a very high detection speed. The detection system must also have high prediction accuracy to conform to industrial quality-management standards. The surface-defect detection problem is to identify the defect class on a steel surface. Automatic visual inspection systems have been in the industry for a while, but they were highly sensitive to the environment and could only be used for a particular system. Here we use a convolutional neural network approach to identify the type of defect. With the advancement of deep learning, especially the Convolutional Neural Network (CNN), image classification has become more sophisticated and accurate; CNNs have the potential for high detection speed and, at the same time, highly accurate predictions. For training the network, a dataset of 1,800 images belonging to 6 defect classes was selected from the Northeastern University (NEU) steel defect dataset. From experimentation we learned that a sequential model architecture could be used for this dataset, and an 8-layer CNN model was built. The loss function used is categorical cross-entropy, and the optimizer is Nadam. The images were pre-processed using Keras pre-processing to improve dataset variability; the parameters varied during pre-processing include width shift, height shift, shear, zoom, horizontal flip, and rotation.
After pre-processing with Keras, a dataset with more variability is obtained. Max pooling is used to reduce the amount of data at each level without affecting quality, and a fully connected layer at the end enables classification. The model is built with 8 layers, alternating convolutional and max-pooling layers, with the fully connected layer at the end; it was trained with the image dataset on Colab. The convolutional layers use 3x3 filters, the activation function is the Rectified Linear Unit (ReLU), the input images are 200x200 grayscale, and the classification activation is the softmax function. There are 6,816,198 trainable parameters in the model, which was trained for up to 500 epochs. To improve accuracy, hyperparameter tuning was carried out: the loss function, which measures the error of the model output against the desired output, was varied, and the optimizer functions, which adjust attributes of the network such as weights to reduce the losses, were fine-tuned. Performance evaluation of the CNN model and hyperparameter tuning were carried out to obtain maximum accuracy; a model accuracy of 99.36% was obtained through fine-tuning, and the detection time was reduced to microseconds. Images collected from local industries can be used to test the validity of the model, and an implementation with YOLOv3 can be pursued for faster image detection in real-time manufacturing scenarios.
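The max-pooling step described above, keeping the strongest activation in each window so feature maps shrink without losing the defect signal, can be shown in a few lines of NumPy (a stride-equals-kernel 2x2 pool, matching the alternating conv/pool layout; not the authors' Keras code):

```python
import numpy as np

def max_pool2d(img, k=2):
    """k x k max pooling with stride k: keep the strongest activation in
    each window, shrinking the feature map by a factor of k per axis."""
    h, w = img.shape
    h2, w2 = h - h % k, w - w % k  # trim so the shape divides evenly
    patches = img[:h2, :w2].reshape(h2 // k, k, w2 // k, k)
    return patches.max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(x))  # [[ 5.  7.] [13. 15.]]
```

Each 2x2 block collapses to its maximum, so a 200x200 input halves to 100x100 after one pooling layer, which is how the alternating layers keep the parameter count manageable.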
Mr. Sameh Magdy
In this paper, a new face recognition technique based on the Hidden Markov Model (HMM), pre-processing, and feature extraction (K-means and the Sobel operator) is proposed. Two main contributions are presented. The first is in the pre-processing, where the image's edges are normalized so that the HMM models become insensitive to edge variations. The second is a new technique for extracting image features by splitting the image into strips of non-uniform height depending on the distribution of foreground pixels; the foreground pixels are extracted using vertical sliding windows. The proposed technique is faster and more accurate than the other techniques investigated for comparison. Moreover, it is capable of recognizing the normal face (center part) as well as the face boundary.
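The feature-extraction idea, Sobel edges scanned with vertical sliding windows to locate foreground pixels, might be sketched as follows; the window width and threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def sobel_edges(img):
    """Apply the Sobel operator to a 2-D grayscale image (plain NumPy sketch)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)   # horizontal gradient
            gy = np.sum(patch * ky)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

def foreground_counts(edges, win=4, thresh=1.0):
    """Count foreground (edge) pixels inside vertical sliding windows,
    giving the distribution used to split the image into strips of
    non-uniform height.  win and thresh are illustrative assumptions."""
    counts = []
    for c in range(0, edges.shape[1] - win + 1, win):
        counts.append(int((edges[:, c:c + win] > thresh).sum()))
    return counts
```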
Ubiquitous computational power. Faster data processing. Rapid progress of analytic techniques. We are amid major changes all around us, and they are happening at an exponential pace. Artificial Intelligence (AI), which aims to mimic human cognitive functions, is bringing a paradigm shift to the field of radiology. In the last decade, AI techniques known as deep learning have delivered rapidly improving performance in image recognition, caption generation, and speech recognition. Further implementation of AI in radiology will significantly improve the quality, value, and depth of radiology's contribution to patient care and revolutionize radiologists' workflows. However, recent reports on health information technology (IT) show that the fit between purchased technologies and clinical work systems is critical in determining whether intended end users accept or reject a technology, use or misuse it, and incorporate it into their clinical workflows or work around it. This paper assesses technology implementation frameworks in the context of AI in radiology and employs a widely accepted and validated technology acceptance framework, the Technology Acceptance Model (TAM). The model is built on the premise that when an end user is introduced to a technology, there are constructs and relationships that influence when and how the user will interact with it. In addition, the findings can inform and provide guidance for policymakers, AI application developers, and business management on the educational needs of radiologists, on research and development, and on the role of radiologists in moving forward with AI in radiology.
Human divers are exposed to health and safety risks every time they perform subsea activities such as inspection and manipulation work, and they are also limited in their practical depth. Consequently, underwater robots have filled that role with varying degrees of success. Directly teleoperated via a tether, Remotely Operated Vehicles (ROVs) possess no autonomy and have become the norm subsea. Their smarter cousins, Autonomous Underwater Vehicles (AUVs), have no umbilical but until now have had several drawbacks of their own: preprogrammed missions without the option for operators to intervene, and a lack of any manipulation capability to accomplish ROV-style tasks. This presentation presents a third option: an autonomous underwater robotic vehicle called Aquanaut. Aquanaut's novel, shape-shifting morphology gives it the ability to conduct long-range, efficient transit while also hovering with full attitude control for stable, in-close manipulation tasks. As a tetherless, transformable subsea robot, Aquanaut is able to perform subsea services such as inspection, maintenance, and repair using powerful subsea perception techniques and machine learning strategies that enable autonomous manipulation behaviors. This new type of subsea vehicle is enabled by a NASA-inspired spaceflight robotics command and control (C2) architecture that offers the latest in autonomous behaviors yet provides operators the flexibility of user-in-the-loop control over low data rates, thereby eliminating the need for costly topside infrastructure and mission-limiting tethers. The presentation will introduce Aquanaut's underwater perception system, learning principles, and training techniques, and will discuss the manipulation and grasping autonomy that gives Aquanaut its high degree of self-sufficiency and self-directedness in subsea environments.