Dear ResearchGate community,
I need your help :-)
I am writing a thesis; my overall research question is:
What are the factors that influence consumers’ acceptance of voice shopping using voice assistants with screens (e.g. Amazon Echo Show) and without screens (e.g. Amazon Echo)?
Theoretical Model: UTAUT2
- How can I best compare voice assistants with a screen (e.g. Amazon Echo Show) and those without displays using UTAUT2? → As a moderator or by using 2 questionnaires?
- Also, I am considering including 2 different product types (habitual goods vs. a more complex product like sneakers).
→ Is this too much?
Thanks for your opinion!
Voice assistants in mobile phones, like Siri and Google Assistant, have their own set of voices. Is there any way I could replace these with my own voice?
Affective technologies are interfaces from the branch of emotional artificial intelligence known as affective computing (Picard, 1997). Examples include facial emotion recognition technologies, wearables that can measure your emotional and internal states, social robots that interact with the user by extracting and perhaps generating emotions, and voice assistants that can detect your emotional state through modalities such as voice pitch and frequency.
Since these technologies are relatively invasive to our private sphere (feelings), I am trying to find influencing factors that might enhance user acceptance of these types of technologies in everyday life (I am measuring the effects with the TAM). Factors such as trust and privacy might be very obvious, but moderating factors such as gender and age are also very interesting. Furthermore, I need relevant literature on which I can ground my work, since I am writing a literature review on this topic.
I am thankful for any kind of help!
I conducted an experimental study last week. The research topic is the influence of voice assistants on satisfaction, attitude, and purchase behaviour.
I had 46 participants in total, 23 in each of the control and experimental groups.
I have attached my conceptual framework. Each factor has 4 items in the questionnaire. Questions were designed on a 7-point Likert scale from 1 to 7 (ordinal).
So which tests do I need to run to get the best results? What is the best test for comparing the control and experimental groups? A t-test? A KS test? And what is the best test for understanding the correlations between variables? Factor analysis? Regression analysis?
I'm a bit confused and don't know where to start or which steps to follow. I'd appreciate your help. Thanks.
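Not a full answer, but as a starting point, here is a minimal sketch in Python (using SciPy) of the kinds of comparisons mentioned above. The scores below are synthetic placeholders, not real study data, and the variable names (satisfaction, attitude) are just illustrative stand-ins for constructs from a framework like the one described:

```python
# Sketch: comparing a control and an experimental group on Likert-scale data.
# All scores below are randomly generated placeholders, NOT real study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# 23 participants per group, item scores on a 1-7 Likert scale
control = rng.integers(1, 8, size=23).astype(float)
experiment = rng.integers(2, 8, size=23).astype(float)

# Parametric comparison (assumes roughly interval-scaled, normal-ish data)
t_stat, t_p = stats.ttest_ind(control, experiment, equal_var=False)

# Non-parametric alternative, often preferred for ordinal Likert data
u_stat, u_p = stats.mannwhitneyu(control, experiment, alternative="two-sided")

# Spearman rank correlation between two (hypothetical) constructs
satisfaction = rng.integers(1, 8, size=46).astype(float)
attitude = np.clip(satisfaction + rng.normal(0, 1.5, size=46), 1, 7)
rho, rho_p = stats.spearmanr(satisfaction, attitude)

print(f"Welch t-test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"Mann-Whitney U: U={u_stat:.0f}, p={u_p:.3f}")
print(f"Spearman rho={rho:.2f}, p={rho_p:.3f}")
```

With ordinal 7-point data and n = 23 per group, the Mann-Whitney U test is the safer default for the group comparison, and Spearman (rank-based) correlation is the safer default over Pearson; factor analysis and regression would come later, once item reliability is established.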
This question isn’t easily answered, and I am curious to see what other risks and uses may arise in the future.
The field of AI is extremely broad and the concept itself is often vaguely defined. That being said, one thing is for sure: AI truly is a game changer. AI has been described as a ‘disruptive technology’, that is, a technology that displaces well-established products or technologies, creating whole new industries and markets. Earlier examples of disruptive technologies include steam power and the computer.
A quick glance at the potential uses of this technology makes this classification easy to understand. AI has the potential to reshape the world around us and it may come as a surprise to some how pervasive this technology already is in our modern, tech-filled lives. Let’s have a look at some of the ways that AI is currently in use, before getting into some more technical aspects of the technology.
Four main uses of AI in our lives right now:
- Autonomous vehicles: Vehicles that are able to navigate without a human operator.
- Voice assistants: Siri, Cortana, Alexa. Our main interactions with AI at the moment are with these friendly, if somewhat impersonal, assistants.
- Algorithms: From deciding if your loan application is approved to sifting through job applications, these tools are seeing an exponential uptake in use.
- Facial recognition: Simply unlocking your phone, or an Orwellian tool for surveillance? It could go either way.
Those are just a few of the ways we see AI in action today. We all know that computers can do amazing things, but those outside the field of computer science may struggle to understand how they do what they do. AI, in that regard, is no different.
AI can be broadly divided into five categories:
- Machine Learning: The process of teaching computers to recognize patterns from data, which underpins most AI applications.
- Deep Learning: A subset of machine learning, utilising neural networks (loosely inspired by the human brain) that learn from their mistakes. This is the approach behind systems such as AlphaZero, which taught itself to beat the world's strongest chess programs.
- Natural Language Processing: The ability for computers to comprehend human language. Think of the previously mentioned virtual assistants.
- Machine Perception: Allows computers to understand complex data and the world around them. Autonomous vehicles make use of this technology to identify pedestrians and road signs.
- Generative Adversarial Networks: The AI tech behind the worrying ‘deepfake’ videos surfacing on the web.
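To make the first category a little more concrete, here is a toy sketch in Python of machine learning in its simplest form: a perceptron that learns a pattern (here, the logical AND rule) from labelled examples rather than being explicitly programmed with it. This is only an illustration of the idea of "recognizing patterns from data", not a production AI system:

```python
# Toy illustration of "learning patterns from data": a perceptron
# that learns the logical AND rule from labelled examples alone.

# Training data: inputs (x1, x2) and the desired output x1 AND x2
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Fire (output 1) if the weighted sum of inputs exceeds the threshold
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Repeatedly nudge the weights toward the correct answers
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # learned AND: [0, 0, 0, 1]
```

The program is never told the AND rule; it only sees examples and adjusts its weights when it gets one wrong. Deep learning applies the same principle, scaled up to millions of weights arranged in layers.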
The possibilities for the future use of these AI technologies are limitless. Computers can already outpace the human brain at certain computations by orders of magnitude. Efficiency could be greatly increased in transport networks, agriculture, medicine and countless other sectors of human activity.
What are the risks?
If you have ever watched a Terminator or Matrix film, then you know about the feared ‘singularity’. Basically, this is the point where an AI supersedes us humans as the most intelligent ‘being’ on Earth. Most researchers consider this unlikely, and if it ever happened, it would be far into the distant future.
Officials are less concerned about an apocalyptic AI takeover and more concerned about the practical risks associated with this technology. There are some serious risks that should be on the minds of legislators:
● Bias and discrimination: AI systems do not design themselves; humans design these tools. Humans also carry inherent biases and values which can make their way into the code. The Guardian investigated this problem in 2018, finding that googling the phrase “unprofessional hairstyles for work” returned images of mainly black women with natural hair, while searching “professional hairstyles” returned pictures of coiffed white women.
● Invasion of privacy: AI can threaten privacy both through its design and its deployment. A massive amount of personal data is already collected by big tech companies. However, the use of AI in the space of ‘big data’ could create serious problems, where AI systems can target, profile, or nudge data subjects without their knowledge or consent.
● Unclear objectives and unintended outcomes: The World Economic Forum outlines this dilemma quite well. Imagine that you tell an AI system to eradicate cancer from the world. It does so, but only by eradicating humans. The system succeeded in its goal very efficiently, but not in the way humans intended it to. Consider also the moral dilemma with driverless cars. In the event of a crash, does the car kill the 4 older people inside the vehicle, or does it swerve and kill the 6 younger people on the sidewalk?
I'm sure there are many more risks and benefits associated with this disruptive technology. And I am very interested in hearing your opinions!