
Identifying User Experience Aspects for Voice User Interfaces with Intensive Users

Abstract

Voice User Interfaces (VUIs) are becoming increasingly available, while users raise concerns, e.g., about privacy issues. User Experience (UX) helps in the design and evaluation of VUIs with a focus on the user. Knowledge of the relevant UX aspects for VUIs is needed to understand the user's point of view when developing such systems. Known UX aspects are derived, e.g., from graphical user interfaces or expert-driven research. The user's opinion on UX aspects for VUIs, however, has thus far been missing. Hence, we conducted a qualitative and quantitative user study to determine which aspects users take into account when evaluating VUIs. We generated a list of 32 UX aspects that intensive users consider for VUIs. These overlap with, but are not limited to, aspects from established literature. For example, while Efficiency and Effectivity are already well known, Simplicity and Politeness are inherent to known VUI UX aspects but are not necessarily an explicit focus. Furthermore, Independency and Context-sensitivity are examples of new UX aspects for VUIs.
Kristina Kölln (1), Jana Deutschländer (2), Andreas M. Klein (3), Maria Rauschenberger (1) and Dominique Winter (4)
(1) Faculty of Technology, University of Applied Sciences Emden/Leer, Emden, Germany
(2) Berliner Hochschule für Technik, Berlin, Germany
(3) Department of Computer Languages and Systems, University of Seville, Seville, Spain
(4) University of Siegen, Siegen, Germany
ORCID: 0000-0002-8625-4903 (Kölln), 0000-0003-3851-4384 (Deutschländer), 0000-0003-3161-1202 (Klein), 0000-0001-5722-576X (Rauschenberger), 0000-0003-2697-7437 (Winter)

Keywords: Voice User Interface, VUI, User Experience, UX, Voice Assistant, HCI, User-centered.

Kölln, K., Deutschländer, J., Klein, A., Rauschenberger, M. and Winter, D. Identifying User Experience Aspects for Voice User Interfaces with Intensive Users. DOI: 10.5220/0011383300003318. In Proceedings of the 18th International Conference on Web Information Systems and Technologies (WEBIST 2022), pages 385-393. ISBN: 978-989-758-613-2; ISSN: 2184-3252. Copyright (c) 2022 by SCITEPRESS - Science and Technology Publications, Lda. All rights reserved.

1 INTRODUCTION

A Voice User Interface (VUI) is any kind of software and device combination controlled by the user's spoken input. VUIs have become increasingly popular in recent years, and their use is predicted to rise even more in the future (Strategy Analytics, 2021). However, although a lot of people own a VUI (e.g., in their smartphone), they do not necessarily use them. Possible reasons for non-use are diverse, e.g., fear of data misuse and monitoring. Yet, on the other end of the spectrum is a group of intensive users (Klein et al., 2021). These intensive users show an appreciation for the use of VUIs that goes beyond the pure functionality, i.e., user experience aspects of VUIs.

To develop a positive User Experience (UX), the Human-Centered Design (HCD) framework has become widely accepted. HCD is a holistic approach for designing a UX that fits the target group by focusing on the user (ISO 9241-210, 2019). We should know which UX aspects users take into account when evaluating the quality of VUIs, since different UX aspects are important for different users or products (Meiners et al., 2021). For example, some users are concerned about which data is collected and how, while others mention the need for higher accuracy of commands (Rauschenberger, 2021; Klein et al., 2021).

Recent research includes several attempts to define important UX aspects of VUIs using an expert-driven process (Hone and Graham, 2000; Kocaballi et al., 2019; Klein et al., 2020a). To the best of our knowledge, however, there is no user-driven identification of relevant UX aspects for VUIs that is based on up-to-date user data.

In this article, we present the identified UX aspects using a user-centered mixed-methods approach (McKim, 2017; ISO 9241-210, 2019). We chose to concentrate on intensive users because they engage with VUIs in depth and can offer profound insights
into extensive usage scenarios.

This article is structured as follows: Section 2 introduces UX, recent research about UX of VUIs, and mixed methods. Section 3 explains our methodology by describing the interview and survey process, the participants, and the qualitative content analysis. Section 4 presents and discusses our results. Section 5 finishes with a conclusion and future work.

2 BACKGROUND & RELATED WORK

Current challenges when using VUIs are, e.g., speech intelligibility, correct command execution, data security, and privacy (Klein et al., 2021; Tas et al., 2019; Rauschenberger, 2021). UX assessment by considering specific UX aspects for VUIs is an essential evaluation method for overcoming barriers and skepticism as well as meeting users' needs. In the following, we briefly introduce UX, how to identify UX aspects for VUIs, and VUI assessment approaches and methods.

UX is a holistic concept that considers emotion, cognition, and physical action before, during, and after using a product (ISO 9241-210, 2019). UX has a set of distinct quality criteria: pragmatic, i.e., classical usability criteria such as efficiency, and hedonic, i.e., non-goal criteria such as stimulation (Preece et al., 2002). These UX quality criteria, also called UX aspects, can be identified and evaluated, e.g., by conducting empirical studies. Focusing on relevant UX aspects enables efficient product development and evaluation, e.g., by using the most suitable questionnaires (Winter et al., 2017). Still, there is no consensus on UX measurement specifically for VUIs (Seaborn and Urakami, 2021).

Various methods are available for VUI evaluation, but they do not necessarily focus on UX. A study analyzed six questionnaires that are commonly applied for VUI evaluation and assessed their suitability regarding various UX dimensions (Kocaballi et al., 2019). Its authors recommend either combining questionnaires to cover UX more comprehensively or measuring a distinct UX dimension in detail. Another VUI evaluation method is the application of heuristics, which are guidelines for design and evaluation. They mostly focus on usability and overlook certain UX aspects (Wei and Landay, 2018; Langevin et al., 2021).

Another option to measure different UX aspects for VUIs is the modular questionnaire concept UEQ+ (Schrepp and Thomaschewski, 2019b). Because of its flexible approach, researchers could, for example, utilize the three voice quality scales mixed with, say, 3 out of the 17 other UEQ+ scales. Thereby, the researchers create a questionnaire related to their research question for product-specific UX aspect evaluation (Klein et al., 2020b). Examples of other UEQ+ scales are Attractiveness, Novelty, and Efficiency. The voice quality scales are constructed with consideration of human-computer interaction (HCI) and the VUI design process (Klein et al., 2020a). User, system, and context all influence HCI significantly (Hassenzahl and Tractinsky, 2006). Improving the VUI design process requires a deep understanding of context, user, and application to define relevant evaluation criteria (Cohen et al., 2004), but that definition back then targeted only usability instead of the holistic UX concept.

In recent studies, mixed-methods approaches have become more popular (McKim, 2017), as they provide certain advantages. For example, mixed methods can be applied in single questionnaire experiments if there is a questionnaire with a combination of standardized and open questions (Biermann et al., 2019). Another example is a comprehensive study design (Iniesto et al., 2021), where the combination of standardized questionnaires and semi-structured interviews allows the researchers to cover broader aspects and gain in-depth information at the same time. Our mixed-methods approach aims to identify the missing UX aspects that users take into account when evaluating VUIs.

3 METHODOLOGY

Our target group comprises intensive VUI users who use VUIs regularly, i.e., from daily to several times a week, in a private or professional environment (Klein et al., 2021). They have at least one year of usage experience and use VUIs in various scenarios. Hence, intensive users have already dealt with VUIs more deeply and can provide comprehensive insights into their use. We aim to identify the target group's UX aspects when using VUIs. For this purpose, we formulated the following research questions (RQ):

RQ1: What is intensive users' VUI frequency of use?
RQ2: What are intensive users' reasons for VUI use?
RQ3: What are intensive users' UX aspects for VUI?

First, we explore the frequency of use (RQ1) of intensive users by considering shorter time intervals than in previous literature. Next, we ask users about their reasons for use (RQ2) to reveal the intensive users' usage patterns and scenarios. We then asked the intensive users to share their positive and negative VUI
Table 1: Participants' durations, devices, and applications.

Participant | Duration of use | Devices | Application
P1 | 3 years | Alexa | Accessibility (visual), smart home control
P2 | >5 years | Alexa, Siri | Accessibility (visual), librarianship
P3 | >10 years | Alexa*, Siri | Accessibility (visual), smartphone control
P4 | 3 years | Alexa, Siri | Accessibility (visual), search queries
P5 | >10 years | Dragon, Siri | Accessibility (visual), working tool, smartphone control
P6 | >10 years | Alexa, Dragon, Siri | Accessibility (motor), working tool, smartphone control
P7 | >5 years | Alexa, Siri | VUI development
P8 | >5 years | Alexa, in-car entertainment | Smart home control
P9 | 1 year | Google Assistant | Timer, search queries
P10 | >5 years | Alexa, smartphone** | Radio substitute, (fun) search queries
(* stopped using Alexa; ** unknown smartphone brand)

experiences as well as suggestions for VUI improvements in order to determine noteworthy UX aspects for the target group (RQ3).

To answer the RQs, we follow a mixed-methods approach: we conduct a qualitative study with semi-structured interviews followed by a quantitative study with an online questionnaire. The questionnaire is designed to verify the results of the interviews and to compare them with a broader sample of participants.

3.1 Qualitative User Study with Interviews

We conducted ten semi-structured interviews with a heterogeneous group of intensive users in the qualitative study. We then analyzed the collected data with a qualitative content analysis (Mayring, 1994).

3.1.1 Procedure

From April to May 2021, we conducted ten interviews applying the semi-structured expert-interview methodology (Bogner et al., 2014). In order to answer our three RQs, we constructed the interview guidelines to consist of questions about the participants' positive and negative expectations and experiences regarding VUIs as well as their contexts of use. We ran two pretests to ensure that the guidelines were useful and guiding. Afterwards, we translated the interview guidelines into English to include international participants and made an additional version to interview a user whose children also use the VUI. The interview guidelines are available in the original language German and in English translation in the research protocol (Kölln et al., 2022).

We conducted the interviews during online video sessions using Microsoft Teams, or, in one case, a phone call. The interviews were recorded, subsequently transcribed with a simple scientific transcript (Dresing and Pehl, 2018), and made anonymous. Two interviews had to be documented with a memory log because the recording failed. All transcriptions and memory logs are available in the research protocol in their original language (Kölln et al., 2022). Afterwards, the collected data was analyzed with the qualitative content analysis (Mayring, 1994).

3.1.2 Interview Participants

All interview participants (see Table 1) meet the requirements for our target group: they all use VUIs daily in a private or professional context and have at least one year of usage experience (see Table 1, column Duration of use). We included P7, who works with VUIs daily in VUI software development but does not actually consider themself a regular user. P7 responded to a social media call on the platform LinkedIn. Other participants, who were already known to use VUIs more intensively, were acquired from the personal networks of the authors.

The interviewees are heterogeneous, e.g., in their contexts of use and characteristics. Four out of ten use VUIs in a professional environment, and the other six in private environments. Six out of ten participants have an impairment, while the other four have none. The participants are 27 to 69 years old. P10 also uses the VUI with their children, who are four and six years old. Two participants are female, eight are male.

Most participants use Alexa (8/10), followed by Siri (6/10). The speech recognition software Dragon is used as a working tool (2/10). Least used are an in-car entertainment system, Google Assistant, and an unspecified smartphone VUI (each 1/10) (see Table 1, column Devices).

The main usage scenarios of the participants are: making their daily lives easier as users with impairments (6/10), smartphone control and search queries (each 3/10), and smart home control (2/10). In addition, there are some specific main applications, such as librarianship, timer, radio substitute, or VUI development (each 1/10) (see Table 1, column Application).
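The per-device counts reported above follow directly from Table 1. As a minimal sketch (the script is ours for illustration, not part of the study's tooling; the device lists are transcribed from Table 1):

```python
from collections import Counter

# Device lists per interview participant, transcribed from Table 1.
devices = {
    "P1": ["Alexa"],
    "P2": ["Alexa", "Siri"],
    "P3": ["Alexa", "Siri"],  # P3 has since stopped using Alexa
    "P4": ["Alexa", "Siri"],
    "P5": ["Dragon", "Siri"],
    "P6": ["Alexa", "Dragon", "Siri"],
    "P7": ["Alexa", "Siri"],
    "P8": ["Alexa", "in-car entertainment"],
    "P9": ["Google Assistant"],
    "P10": ["Alexa", "smartphone (unknown brand)"],
}

# Count each device once per participant who names it.
counts = Counter(d for ds in devices.values() for d in ds)
print(counts["Alexa"], counts["Siri"], counts["Dragon"])  # 8 6 2
```

The remaining devices (in-car entertainment, Google Assistant, unspecified smartphone VUI) each tally to 1, matching the "each 1/10" figure in the text.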
3.1.3 Qualitative Content Analysis

We applied the summarizing content analysis, a form of qualitative content analysis that is well known and most often used in German-speaking countries (Mayring, 1994).

In the summarizing content analysis process (see Fig. 1), we analyzed the transcripts using a technique called "coding": first, we defined the transcripts as the units of analysis. We then highlighted the information-bearing parts and summarized their key messages on an abstraction level that fit our purpose. These key messages are called "codes." We then removed all selections that were not related to our research questions to make a first reduction, followed by clustering codes with similar key messages. The remaining codes were then clustered into a category system that is the basis for our list of UX aspects. We finally rechecked all interviews with the developed code system in a second round of coding to ensure all interviews were coded with the same procedure. Because only minor changes were made to the code system in the second round, an additional control round was unnecessary.

Figure 1: Process of conducting a summarizing content analysis, based on (Mayring, 1994).

Following this method, two authors alternately coded using the software tool MAXQDA Standard 2020 (Release 20.4.1) in the following way: author A codes P1, then author B codes P2, and so on until P10 is reached. In the second round, the authors swapped the participants' transcriptions, so author A coded P2, then author B coded P1, and so on.

3.2 Quantitative User Study with a Survey

We conducted an online survey with intensive users to obtain more comprehensive results. We obtained an additional amount of qualitative data from the questionnaires, which was also analyzed with an adjusted qualitative content analysis.

3.2.1 Procedure

We conducted a survey with German-, English- and Spanish-speaking participants using Google Forms from April to June 2021.

We developed our questionnaire as follows (see Fig. 2): first, we developed the content based on the research questions and the findings of the interviews. The questionnaire combines quantitative and qualitative questions. In our first pilot test, we presented our survey draft to four UX experts. We made changes, e.g., to the informative texts or the order of questions. Then we did three pretests consecutively, each with the reworked version from the preceding pretest. After each pretest, we mostly just made changes in the wording in order to help the participants to better understand the questions. Our final questionnaire contains 19 questions about the experiences and expectations of the VUI users regarding the VUIs. The complete questionnaire can be found in the research protocol in English, German, and Spanish versions (Kölln et al., 2022).

Figure 2: The development process of the questionnaire.

The survey was then shared on the social media platforms LinkedIn, Facebook, and Twitter as well as through the personal networks of the authors. Contacts then also shared the survey with their contacts and on their social media channels. We repeated the call to participate a few times to gain additional participants.

For the qualitative content analysis, we partly adjusted the summarizing content analysis to our research needs: we did not build a new code system but used the code system that we had developed for the
interviews. Hence, we were able to match the results of the survey with our findings from the interviews.

3.2.2 Survey Participants

We collected 76 participant responses and excluded 24 for the following reasons: one was a duplicate, five records had fewer than three questions answered, and 18 participants did not meet the target group requirement of a high frequency of use. For the analysis, we took 52 participants into account. We found that 69% (36/52) of the survey participants were male, 29% (15/52) were female, and 2% (1/52) did not answer the question. The average age of the survey participants is 43 (SD 13).

The participants use diverse software and devices (see Fig. 3). Of all named devices, Alexa is the most commonly used (37/52), followed by Siri (29/52). The third most used device of the participants is Google Assistant (20/52). This keyword combines all mentions of Google VUIs because the participants were not always clear about which Google device they used (some wrote "Google," "Google Voice," or even "Google Home, or Google Assistant?"). Voice-controlled navigation or entertainment systems of cars, disregarding the manufacturer of the car, were summarized as in-car entertainment (7/52). Least frequently named were the speech recognition software Dragon (3/52) as well as Cortana (3/52), followed by a few other VUIs that were each named by at most two participants (13/52).

Figure 3: Devices used by the target group (N = 52).

While we identified Alexa and Siri as the most commonly used VUIs among our participants, a representative German study (N = 3184) found that Google Assistant (12%) and Alexa (9%) are the most commonly used (Tas et al., 2019). This may differ from our participants, but since we did not look for brand-specific evaluations, we do not expect significant discrepancies in our results.

4 RESULTS & DISCUSSION

We processed our collected data according to the description in the previous sections with the content analysis (Mayring, 1994). The qualitative data from both studies were analyzed using the data analysis software MAXQDA Standard 2020 (Release 20.4.1), in both cases under the operating system Microsoft Windows 10. Quantitative data were analyzed in Microsoft Excel (Version 2204).

We consider 62 participants across the qualitative and quantitative studies: 71% male (n = 44), 27% female (n = 17), and 2% (n = 1) who did not answer. Although our study is not representative, its distribution is in line with current literature (77% (Pyae and Joelsson, 2018), 72% (Sciuto et al., 2018), and 79% (Klein et al., 2021) male participants). As in other studies, the gender distribution among our participants is biased towards male VUI users.

We first report our interview results (n = 10) and then the survey results (n = 52) in each section. Quotations are translated from the original language German, English, or Spanish, and the original quotes are available in the research protocol (Kölln et al., 2022).

4.1 What Is Intensive Users' VUI Frequency of Use?

Even though all interview participants (n = 10) meet our definition of intensive users, they reported considerable differences in their frequency of use. Therefore, we asked the survey participants (n = 52) to be more specific about their frequency of use. That is why we subdivided the answer options for daily use into three options: less than an hour a day, a few hours a day, and more than five hours a day. Out of the survey participants who use their VUI daily, most use it less than an hour a day (29/52). This is followed by a few hours a day (9/52) and more than five hours a day (4/52). Fewer survey participants use their VUI a few times a week (8/52), and some VUI developers use it not regularly (2/52) (see Fig. 4).

Another study, which also examined the frequency of use, found that 76.6% of the identified intensive users used VUIs on a daily basis (Klein et al., 2021), which is in line with our results. However, those intensive users were only distinguished between approximately once a day and several times a day. A population-representative study conducted in Germany in 2019 revealed 11% daily VUI users and 19% using VUIs several times a week, resulting in 30% intensive users (SPLENDID RESEARCH GmbH, 2019). Our findings are in line with these results and
show the survey participants' distribution of VUI frequency of use in more detail.

Figure 4: Frequency of use of the survey participants (N = 52).

The frequency-of-use distribution could have its origin in the different usage scenarios of the VUIs. One typical usage scenario of Alexa is, e.g., the timer function, which only takes a few seconds to execute. However, if a user uses Dragon to dictate their emails, they sometimes use the device the whole day as a tool at work or even in their private time. Therefore, the frequency of use is connected not only to the VUI system but also to the context of use. For a more precise analysis, the context needs to be considered in the design of VUIs.

4.2 What Are Intensive Users' Reasons for VUI Use?

The interview participants named various reasons for their VUI usage. P2 describes how, as a blind person, using VUIs gives them the opportunity to do things for which they have no easy alternative solution. For example, they use Alexa to order audiobooks from the library, since the library website does not have an accessible checkout. Due to a motor impairment, P6 uses Dragon as a tool at work to write most of their email correspondence. P8 uses a VUI out of convenience for the navigation system in their car and to control their smart home. P10 describes how their children use Alexa for fun.

Since we identified multiple possible reasons for use, we formed reasons-for-use categories to which the survey participants could assign themselves (see Fig. 5). It was possible to select multiple categories and to give custom answers.

Most of the survey participants use VUIs for comfort (48/52). This was followed by fun (23/52) and as a tool at work (12/52). A few use VUIs because they do scientific research on VUIs (3/52) or because of some kind of impairment, e.g., motor (3/52) or visual (1/52). This is mostly in line with the answers of the interview participants. Almost all of them mention various scenarios in which the VUI grants them comfort. Participants with impairments especially mention several positive experiences with their VUIs: P1 explains that not having to stand up from the sofa to turn the lights on or off is especially comfortable for them, since they have low vision. P2 says, "Of course, I don't need it in my life, but as a blind person, I can benefit greatly from smartphones and such assistance systems." Similarly, P3 says that as a blind person, typing on an iPhone is just awkward. Fun is also mentioned several times by the interview participants: the children of P10 mostly use Alexa to ask fun questions, and P2 likes to tease Alexa with cheeky questions and listen to her answers. However, we do have more interview than survey participants who use VUIs because of some kind of impairment. In addition, we have no interview participants who do scientific research on VUIs, unlike some of the survey participants.

Figure 5: Reasons for using VUIs of the target group (N = 52).

Existing literature has investigated the context of use, but not the motivation for the usage scenarios (Klein et al., 2021). However, the motivation for use also has a major influence on what is important to the users. Hassenzahl (2008) calls these the do- and be-goals of the users. The do-goals describe what the user wants to do, while the be-goals describe how the user wants to feel. The results of our study provide insights into these do- and be-goals of the target group, e.g., the participants want to use VUIs for smart home control (do-goal) or to stay comfortably on the sofa (be-goal).

We have found that users with disabilities perceive great potential for VUIs to assist them in their daily lives. Currently, participants explore VUI features with a focus on comfort and fun. A few interview participants stated that they would like to use VUIs for even more practical applications once VUIs or linked technologies have a more versatile skill set available; e.g., using VUIs in autonomous driving to give directions would be ideal for P2.
Table 2: The identified aspects the target group named for evaluating UX of VUIs.

Index | Aspect | Interpretation of the participants | Int. (N=10)* | Sur. (N=52)*
1 | Comprehension | The VUI understands the user correctly, even if they do not speak very clearly. | 10 | 37
2 | Error-free | Both the result and the operation do not give errors, wrong answers or misunderstandings. | 6 | 34
3 | Aesthetic | The hardware of the VUI is supposed to be minimalistic. Visual feedback about the status (listening, processing, disabled, etc.) is positively received as long as it is discreet. | 3 | 35
4 | Range of functions | The VUI has as many functions and application possibilities as possible. | 4 | 29
5 | Simplicity | The operation is easy to perform and contains as few steps as possible. | 8 | 23
6 | Effectivity | The user reaches their goal. | 9 | 17
7 | Support of the User | The VUI helps users to achieve their goals. | 3 | 22
8 | Humanity | The user has the feeling of talking to a human being. They can conduct a normal dialogue, the VUI persona responds with humor and empathy, and the voice sounds natural. | 4 | 21
9 | Personal fulfillment | The VUI allows the user to live out their personality. They can speak in their dialect and do not have to disguise themselves in order to be better understood. | 3 | 21
10 | Context-sensitivity | The VUI knows its user, understands the current situation, and can remember the context of the conversation. | 4 | 20
11 | Efficiency | The user reaches their goal without detours. | 7 | 17
12 | Privacy | The VUI should not permanently listen in, interrupt, or even record private conversations. | 6 | 16
13 | Data Security | If personal information must be provided, it can be trusted that it will not be shared and will be handled ethically. | 7 | 15
14 | Time-saving | The user does not need to fetch a device or press a button and can immediately start the usage process. They receive the results immediately after the request. | 6 | 14
15 | Politeness | The VUI does not insult the user; it allows them to finish their sentences and does not activate without being asked. | 0 | 19
16 | Linking with third-party products | Many third-party products should be compatible. The VUI can easily be connected with them, and there are no errors in communication. | 8 | 11
17 | Safety | The VUI gives the user physical and privacy security. For example, security is given by enabling operation in the car without removing the hands from the steering wheel, or by protecting the data from external access. | 2 | 16
18 | Capability to learn | The VUI can learn new commands, learn the personality of its user, and exercise appropriate reactions. Incorrectly learned commands can be deleted. | 2 | 15
19 | Intuitiveness | The user does not need to learn a complicated vocabulary, but can immediately communicate with the VUI using their everyday language. Setting up and learning how to use the VUI is possible without additional help. | 5 | 12
20 | Practicality | The VUI helps the user with everyday challenges. | 7 | 10
21 | Reliability | The VUI responds only when it is addressed, without false activation. The results are correct and verified. The quality of the interaction should be consistently high. | 0 | 15
22 | Help with errors | If an error occurs, a way to fix it is shown. There is a help function. | 3 | 11
23 | Convenience | The user can use the VUI from any situation without having to make an effort. For example, they can use it from the sofa, bed or desk. | 7 | 7
24 | Fun | The VUI is fun to use and its humor is appropriate. | 3 | 8
25 | Customizability | The persona of the VUI can be set by the user according to their preferences (gender, language, humor, voice, etc.). | 0 | 8
26 | Flexibility | The VUI can adapt to different users and situations. | 4 | 4
27 | Voice | The voice of the VUI is pleasant and clearly understandable. | 0 | 7
28 | Responsiveness | The VUI responds as soon as it is addressed, but only when it is addressed. | 0 | 6
29 | Independency | The user does not need any assistance in using the VUI. It allows additional independence for users who would have problems operating a GUI (for example, those with visual or motor impairment, dyslexics, and children). | 4 | 2
30 | Innovation | The VUI has new, modern and unique features. | 2 | 1
31 | Ad-Free | Advertising is not played or can be turned off. | 0 | 2
32 | Longevity | The VUI can be used for a long time, does not break quickly, and does not need to be repeatedly replaced with the latest model. | 0 | 2
The aspects are sorted by the total number of mentions. * Mentioned by participants from either the interviews or surveys.

4.3 What Are Intensive Users' UX Aspects for VUI?

In total, we identified 32 UX aspects of intensive users for VUIs (see Table 2). We specify them through the statements of the interview participants, which are available in the research protocol (Kölln et al., 2022). This way, they can be compared with existing scales and terms based on the core statements of the users, without requiring additional associations from preceding research.

In the results, we consider only a single mention of a UX aspect per participant. The number of mentions by a single participant is not necessarily a measure of importance. Emphasis, context, and wording also provide information on priority, but are hardly measurable. For this reason, the evaluation was based on the number of interview participants (see Table 2, column Int.) and survey participants (see Table 2, column Sur.) who mentioned a UX aspect. Due to the small number of participants, we decided not to set a minimum number of participants who must have named a UX aspect. It is possible that the distribution of numbers could be different with a larger group of participants.

A few of our identified UX aspects had already been defined throughout established literature, e.g., Efficiency and Effectivity (ISO 9241-210, 2019) or Aesthetic (Schrepp and Thomaschewski, 2019a), but not necessarily for VUIs. Other UX aspects are part of other known UX factors, e.g., Simplicity and Politeness, which may be part of the UX factor Likeability of the Subjective Assessment of Speech System Interfaces (SASSI) (Hone and Graham, 2000), but are not explicitly considered. Additionally, we did identify several new UX aspects for VUIs, e.g., Independency or Context-sensitivity.

Instead of following a theoretical approach to what a UX aspect should mean, we followed a more user-centered approach to identify UX aspects for VUIs. These UX aspects represent what intensive users think about when evaluating the UX of VUIs and can be used, e.g., to choose the best UEQ+ scales to combine with the VUI scales (Klein et al., 2020b).

4.4 Limitations

Our study is limited to a smaller group of participants, which is put into perspective by the mixed-methods approach. Despite being limited to ten interviews, this study provides meaningful results in the qualitative section, confirmed by the quantitative part of the survey. Data saturation usually appears below a sample size of twenty interviews (Francis et al., 2010).

We also have to consider that our study has a gender bias towards male participants. The skewed gender distribution of the participants arises because, e.g., females use VUIs less than males (SPLENDID RESEARCH GmbH, 2019; Tas et al., 2019). The participants are heterogeneous concerning VUI usage in professional and private locations as well as concerning impairments, so we do not expect relevant selection bias.

Although this study was internationally conducted, most of our participants are from the Western European region, specifically Germany and Spain. The results of the study may be different with more participants from, e.g., the Asian or African regions.

5 CONCLUSION & FUTURE WORK

We explored the usage behavior as well as the expectations and experiences of intensive users of VUIs. This allowed us to make statements about the frequency of and reasons for use of the target group according to our user-centered, mixed-methods approach. Additionally, we were able to determine which UX aspects the target group applies to evaluate VUIs.

We have found that many intensive users not only use their VUI almost daily, but often even for several hours a day. Most of the target group use VUIs for comfort and to make their daily lives easier, e.g., in their smart home. Particularly noteworthy is the potential of VUIs in supporting users with disabilities. We
created a list of 32 UX aspects for VUIs of the target group. Although some of the terms are already known, we explain the UX aspects from the user's point of view and what they expect from a VUI regarding each UX aspect. Additionally, we were able to identify several new UX aspects for VUIs.

Prioritization of these UX aspects should still be performed in the future. We will use the UX aspects for VUIs in future work to determine which UX measurement method considers which UX aspect. Thereby, VUI designers can choose a UX measurement method that fits their users' needs. For example, a comparison with the UEQ+ scales could show the total scales needed to assess the UX of a VUI. This would allow researchers to better adapt their methodology to the UX aspect they wish to evaluate.

REFERENCES

Biermann, M., Schweiger, E., and Jentsch, M. (2019). Talking to Stupid?!? Improving Voice User Interfaces. In Fischer, H. and Hess, S., editors, Mensch und Computer 2019 - Usability Professionals, pages 1-4, Bonn. Gesellschaft für Informatik e.V. und German UPA e.V.

Bogner, A., Littig, B., and Menz, W. (2014). Interviews mit Experten: Eine praxisorientierte Einführung. Springer Fachmedien Wiesbaden, Wiesbaden.

Cohen, M. H., Giangola, J. P., and Balogh, J. (2004). Voice User Interface Design. Addison-Wesley, Boston.

Dresing, T. and Pehl, T. (2018). Praxisbuch Interview, Transkription & Analyse, Audiotranskription. Dr. Dresing und Pehl, Marburg.

Francis, J. J., Johnston, M., Robertson, C., Glidewell, L., Entwistle, V., Eccles, M. P., and Grimshaw, J. M. (2010). What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychology & Health, 25(10):1229-1245.

Hassenzahl, M. (2008). User experience (UX): Towards an experiential perspective on product quality. IHM, September 2008:11-15.

Hassenzahl, M. and Tractinsky, N. (2006). User experience - a research agenda. Behaviour & Information Technology, 25(2):91-97.

Hone, K. S. and Graham, R. (2000). Towards a tool for the subjective assessment of speech system interfaces (SASSI). Natural Language Engineering, 6(3&4):287-303.

Iniesto, F., Coughlan, T., and Lister, K. (2021). Implementing an accessible conversational user interface. In Vazquez, S. R., Drake, T., Ahmetovic, D., and Yaneva, V., editors, Proceedings of the 18th International Web for All Conference, pages 1-5, New York, NY, USA. ACM.

ISO 9241-210 (2019). Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems. Technical report, https://www.iso.org/committee/53372.html.

Klein, A. M., Hinderks, A., Schrepp, M., and Thomaschewski, J. (2020a). Construction of UEQ+ Scales for Voice Quality. In Proceedings of the Conference on Mensch und Computer, MuC '20, pages 1-5, New York, NY, USA. Association for Computing Machinery.

Klein, A. M., Hinderks, A., Schrepp, M., and Thomaschewski, J. (2020b). Measuring User Experience Quality of Voice Assistants. In 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), pages 1-4, Seville, Spain. IEEE.

Klein, A. M., Rauschenberger, M., Thomaschewski, J., and Escalona, M. J. (2021). Comparing Voice Assistant Risks and Potential with Technology-Based Users: A Study from Germany and Spain. Journal of Web Engineering, 7(16):1991-2016.

Kocaballi, A. B., Laranjo, L., and Coiera, E. (2019). Understanding and measuring user experience in conversational interfaces. Interacting with Computers, 31(2):192-207.

Kölln, K., Deutschländer, J., Klein, A. M., Rauschenberger, M., and Winter, D. (2022). Protocol for identifying user experience aspects for voice user interfaces with intensive users.

Langevin, R., Lordon, R. J., Avrahami, T., Cowan, B. R., Hirsch, T., and Hsieh, G. (2021). Heuristic evaluation of conversational agents. In Kitamura, Y., Quigley, A., Isbister, K., Igarashi, T., Bjørn, P., and Drucker, S., editors, Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-15, New York, NY, USA. ACM.

Mayring, P. (1994). Qualitative content analysis, volume 1994. UVK Univ.-Verl. Konstanz.

McKim, C. A. (2017). The value of mixed methods research: A mixed methods study. Journal of Mixed Methods Research, 11(2):202-222.

Meiners, A.-L., Kollmorgen, J., Schrepp, M., and Thomaschewski, J. (2021). Which UX aspects are important for a software product? Importance ratings of UX aspects for software products for measurement with the UEQ+. In Mensch und Computer 2021, MuC '21, pages 136-139, New York, NY, USA. Association for Computing Machinery.

Preece, J., Rogers, Y., and Sharp, H. (2002). Interaction design: beyond human-computer interaction. J. Wiley & Sons, New York.

Pyae, A. and Joelsson, T. N. (2018). Investigating the usability and user e
xperiencesofvoiceuserinterface.InProceedingsofthe20thInternationalConferenceonHuman-ComputerInteractionwithMobileDevicesandServicesAdjunct,pages127–131,NewYork,NY,USA.ACM.Rauschenberger,M.(2021).AcceptancebyDesign:VoiceAssistants.In1stAI-DEbateWorkshop:work-shopestablishingAnInterDisciplinarypErspectiveonspeech-BAsedTEchnology,page27.09.2021,Magde-burg,Germany.OvGU.Schrepp,M.andThomaschewski,J.(2019a).Constructionandfirstvalidationofextensionscalesfortheuserex-periencequestionnaire(ueq).Schrepp,M.andThomaschewski,J.(2019b).DesignandValidationofaFrameworkfortheCreationofUserExperienceQuestionnaires.InternationalJour-nalofInteractiveMultimediaandArtificialIntelli-gence,5(7):S.88–95.Sciuto,A.,Saini,A.,Forlizzi,J.,andHong,J.I.(2018).heyalexa,whatsup?:Amixed-methodsstudiesofin-homeconversationalagentusage.InProceedingsofthe2018DesigningInteractiveSystemsConference,DIS18,page857868,NewYork,NY,USA.Associa-tionforComputingMachinery.Seaborn,K.andUrakami,J.(2021).Measuringvoiceuxquantitatively.InKitamura,Y.,Quigley,A.,Isbister,K.,andIgarashi,T.,editors,ExtendedAbstractsofthe2021CHIConferenceonHumanFactorsinComput-ingSystems,pages1–8,NewYork,NY,USA.ACM.SPLENDIDRESEARCHGmbH(2019).DigitaleSprachassistenten.StrategyAnalytics(2021).Absatzvonintelligentenlaut-sprechernweltweitvom3.quartal2016biszum3.quartal2021.Tas,S.,Hildebrandt,C.,andArnold,R.(2019).VoiceAs-sistantsinGermany.WIKWissenschaftlichesInstitutf¨urInfrastrukturundKommunikationsdiensteGmbH,BadHonnef,Germany.Nr.441.Wei,Z.andLanday,J.A.(2018).Evaluatingspeech-basedsmartdevicesusingnewusabilityheuristics.IEEEPervasiveComputing,17(2):84–96.Winter,D.,Hinderks,A.,Schrepp,M.,andThomaschewski,J.(2017).WelcheUXFak-torensindf¨urmeinProduktwichtig?InHess,S.andFischer,H.,editors,MenschundComputerMuC2017.Gesellschaftf¨urInformatike.V.unddieGermanUPAe.V.IdentifyingUserExperienceAspectsforVoiceUserInterfaceswithIntensiveUsers393