Information Privacy - Science topic

Explore the latest questions and answers in Information Privacy, and find Information Privacy experts.
Questions related to Information Privacy
  • asked a question related to Information Privacy
Question
3 answers
In the context of the rapid development of artificial intelligence technology, how can ethics help us address the social impacts of algorithmic bias, data privacy, and automation? What measures do you think should be taken to ensure that the development of AI complies with ethical principles?
Relevant answer
Answer
To ensure ethical compliance, organizations must implement comprehensive measures, including diverse dataset usage, regular bias audits, clear regulatory guidelines, and mandatory ethics training for developers. Additionally, establishing strong governance structures will help balance technological advancement with social responsibility, ensuring AI development aligns with human values and promotes equitable outcomes for all members of society.
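As a concrete illustration of the "regular bias audits" mentioned above, here is a minimal, self-contained sketch (the group labels and predictions are made up for illustration) computing the demographic parity gap, one common fairness metric such an audit might track:

```python
# Hypothetical bias-audit check: demographic parity gap, i.e. the largest
# difference in positive-outcome rates between demographic groups.
from collections import defaultdict

def demographic_parity_gap(groups, predictions):
    """Return the max difference in positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += int(p)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy audit data: group A gets positive outcomes 75% of the time, group B 25%.
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
predictions = [1, 1, 1, 0, 1, 0, 0, 0]
gap = demographic_parity_gap(groups, predictions)  # 0.75 - 0.25 = 0.5
```

A real audit would run checks like this regularly, over several fairness metrics, and alert when the gap exceeds an agreed threshold.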
  • asked a question related to Information Privacy
Question
3 answers
The Internet of Things (IoT) is poised to transform industries in 2025, driving automation, enhancing efficiency, and connecting billions of devices across diverse sectors. With the support of 5G and, on the horizon, emerging 6G technologies, IoT will leverage ultra-low latency and expanded bandwidth to enable real-time applications, from autonomous transport systems to smart city infrastructures. Industrial IoT (IIoT) will play a crucial role in advancing Industry 4.0, powering predictive maintenance, automation, and streamlined production processes. Additionally, the integration of AI within IoT systems will bring intelligent cybersecurity, as adaptive mechanisms become essential for real-time threat detection across critical infrastructures. Environmental sustainability will be another focal point, with IoT sensors providing data to manage resources such as energy and water, reducing waste and supporting climate initiatives. By embracing decentralized frameworks such as blockchain, IoT in 2025 will address data privacy and interoperability challenges, ensuring secure, cohesive networks that can drive innovation across all sectors.
To further develop and implement this proposed Industry 4.0 model, we are actively seeking funding and collaboration with industry and academic partners. This collaboration will enable us to refine and scale the model, leveraging resources and expertise to achieve impactful results. Our aim is to showcase these advancements at an IEEE Conference in the USA, contributing to the body of knowledge and setting new standards in the field.
Regards
Kazi Redwan
Lead, Tech Wings Lab
Relevant answer
Answer
Eduard Babulak Sir, we are working on this (an automated threat defense system) for industry, cities, and smart homes. We have already proposed a model and submitted it to a prestigious IEEE conference (IEEE ICAISC 2025, Saudi Arabia). Your advice and support in helping us grow would be most welcome.
  • asked a question related to Information Privacy
Question
4 answers
Over the last couple of months a significant number of RG members have been "harassed" by scammers with nonsensical requests/questions, whether to extract personal information from you, to test the power of AI-generated responses, or for some other purpose.
Because of this, RG changed its policy so that you can now only message those who follow you.
So the scammers' approach is now: create a fake account, ask a pseudo-legitimate question, and start following those who react (hoping you will follow back), then send yet another nonsensical request/question. Or, when you answer, they reply and ask to connect on LinkedIn etc.
Check the profile; so far, all of them have:
0 papers
no RG interest score
1 question
Conclusion: it's a fake account, so click Report (and tell RG that you suspect this is a fake profile).
Relevant answer
Answer
I have just been bombarded with fake 'recommendations'. An 'article' was listed twice in RG, one of which lists the British Journal of Aesthetics as the publication source. I 'followed' it to read later. Within hours I received several 'recommendations', spiking quickly within a few days. I had time this morning to check out one of the 'articles' more carefully, and it even has a fake DOI. I've been asked for full texts, and approached on LinkedIn. And this morning I got an email which said that a colleague 'found me on Research Gate', and I was being asked to publish with their 'World Culture' series. And I am now stuck with a load of recommendations that are clearly questionable as the profile from which the recommendation stems has nothing to do with my research area - and is also fake or impersonating someone.
  • asked a question related to Information Privacy
Question
3 answers
How do I write an abstract for my research topic, "Analyzing the Concept of Data Privacy in Big Data Analytics in Banking"?
Relevant answer
Answer
Here's a structured approach to writing an abstract for your research on "Analyzing the Concept of Data Privacy in Big Data Analytics in Banking." Follow this outline to ensure you cover all essential components:
Abstract Outline
  1. Background: Introduce the significance of big data analytics in the banking sector and the growing importance of data privacy.
  2. Objective: Clearly state the main aim of your research (e.g., to analyze the implications of data privacy within big data analytics practices in banking).
  3. Methods: Briefly describe the methodology you used (e.g., qualitative analysis, case studies, surveys, or literature review).
  4. Findings: Summarize key findings related to data privacy challenges, regulatory frameworks, or best practices in the banking industry.
  5. Conclusion: Highlight the implications of your findings for banks and policymakers, and suggest areas for future research.
  • asked a question related to Information Privacy
Question
3 answers
What are the potential risks of Artificial Intelligence (AI) in higher education, particularly concerning data privacy, bias and the digital divide?
How can these risks be mitigated?
Relevant answer
Answer
It was my pleasure, Md. Afroz Alam.
  • asked a question related to Information Privacy
Question
5 answers
How are IoT devices vulnerable, and how does this pose threats to user data privacy?
Relevant answer
Answer
I think addressing IoT security and privacy requires a multi-faceted approach:
Stronger security by design
Regular device updates
User awareness and education
Privacy by design
Data minimization
Transparent data handling
Robust encryption
Secure network connections
Regular security audits
Industry standards and regulations
By implementing these measures, we can mitigate the risks associated with IoT devices and protect user data privacy. It's essential for manufacturers, developers, and users to work together to create a secure and trustworthy IoT ecosystem.
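To make two of the items above concrete (data minimization and privacy by design), here is a minimal sketch, assuming a hypothetical JSON-style telemetry payload: unneeded fields are dropped before transmission and the raw device identifier is replaced by a salted, truncated hash. Field names and the salt are illustrative, not from any real platform.

```python
# Sketch of data minimization for IoT telemetry: keep only whitelisted fields
# and pseudonymize the device ID so the raw identifier never leaves the device.
import hashlib

ALLOWED_FIELDS = {"temperature", "humidity", "timestamp"}  # assumed whitelist

def minimize(payload: dict, salt: bytes = b"per-deployment-salt") -> dict:
    """Drop unneeded fields and replace the raw device ID with a salted hash."""
    out = {k: v for k, v in payload.items() if k in ALLOWED_FIELDS}
    if "device_id" in payload:
        digest = hashlib.sha256(salt + payload["device_id"].encode()).hexdigest()
        out["device_id"] = digest[:16]  # truncated pseudonym, not the raw ID
    return out

raw = {"device_id": "cam-42", "temperature": 21.5, "owner_name": "Alice",
       "humidity": 40, "timestamp": 1700000000}
safe = minimize(raw)
# "owner_name" is gone and "device_id" is now a pseudonym.
```

A salted hash alone is not strong anonymization (the salt must be kept secret, and linkability remains), but the pattern illustrates sending the minimum data the service actually needs.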
  • asked a question related to Information Privacy
Question
4 answers
Before you answer 'yes, it is required', please understand that this question is on surveys where no personal information is collected. My colleagues have published journal papers on studies involving human participants without getting approval from the ethics committee. Those studies are typically on design practice, pedagogy and software usability.
Wherever there is a discussion on ethical approval in studies involving human participants, there is an implicit assumption that the data collected is about the participants, since most studies with human participants are conducted by researchers from medical, ergonomics or related domains. Hence, I am unable to find a brief, definitive answer specifically for this scenario.
Is ethical approval required for expert interviews and Delphi studies?
Relevant answer
Answer
When you say you've completed research "without getting approval from the ethics committee", does that mean you approached an ethics committee, explained your research, and they clarified that ethical approval wasn't required (as can sometimes be the case for some topics of research)?
Or was it that you and colleagues decided that because you felt no ethical approval was required you simply didn't apply for it? That would be more concerning.
If your research involves human participants and an ethics committee is available to you, then you should seek clarification from them rather than decide for yourself what requires approval or not.
Although it's always the responsibility of the researcher(s) to ensure all areas of research are ethical (which is a very broad category of considerations), it's important to check whether and what kind of ethics approval may be necessary.
It might be that for some studies and data sources there is no need for ethical approval. But if you're planning expert interviews and Delphi studies, then yes, I would ask for clarification and expect that an application would be necessary.
If you ran surveys previously where any kind of demographic data was collected, for example, then you did collect personal information. This also applies for any future work you want to do.
It's also not just about identifiable information, it's about a whole host of other issues including researcher and participant welfare, rigour, integrity and more.
Alongside seeking clarification from your ethics committee it may be worth asking them if they offer training or enquiring if any is available at your institution. This should help clarify some of the questions you have.
  • asked a question related to Information Privacy
Question
3 answers
"In what ways might the integration of IoT confront apprehensions surrounding data confidentiality and security, specifically concerning the acquisition, retention, and conveyance of confidential data?"
Relevant answer
Answer
I will try my best to answer this in terms of governance. The product organization, or the person tasked with the integration, can showcase their seriousness about protecting their customers' confidential data through their commitment to policies and procedures. Compliance with industry standards and regulations is another step towards trust and faith.
Independent attestations are excellent evidence.
In terms of technical controls and security, various alternatives can be considered: access management, data anonymization, encryption, and control/data-plane segregation, to name a few.
  • asked a question related to Information Privacy
Question
10 answers
Integrating manufacturing design, Industry 4.0, and Lean Manufacturing can bring significant benefits, but challenges and problems can also arise in the process in terms of data privacy and security. How can we solve these problems?
Relevant answer
Answer
ResearchGate Link:
(PDF) Improving Lean engagement through utilising improved communication, recognition and digitalisation during the COVID-19 pandemic in JLR's powertrain machining facility (researchgate.net)
  • asked a question related to Information Privacy
Question
4 answers
An important question in the field of e-marketing is:
"How can businesses effectively leverage digital technologies and online platforms to create engaging, personalized, and seamless customer experiences that drive customer acquisition, retention, and loyalty?"
This question encompasses several key considerations in e-marketing, including:
1. Customer Engagement: How can businesses use digital channels, such as social media, email marketing, and content creation, to engage customers in meaningful and interactive ways that foster brand affinity and loyalty?
2. Personalization: What strategies and tools can businesses employ to deliver personalized and relevant marketing messages, product recommendations, and experiences tailored to individual customer preferences and behaviors?
3. Omnichannel Integration: How can businesses integrate various digital touchpoints, such as websites, mobile apps, social media, and offline channels, to provide a seamless and consistent customer experience across multiple platforms and devices?
4. Data-driven Marketing: How can businesses leverage data analytics, customer insights, and predictive modeling to optimize marketing strategies, target the right audience segments, and measure the impact of digital marketing initiatives?
5. Conversion Optimization: What tactics and techniques can be used to optimize the customer journey, from initial awareness to conversion, by refining website usability, call-to-action strategies, and conversion rate optimization techniques?
6. Customer Retention and Advocacy: How can businesses use e-marketing to nurture existing customer relationships, encourage repeat purchases, and cultivate brand advocates through loyalty programs, referral incentives, and post-purchase engagement strategies?
7. Regulatory Compliance and Data Privacy: How can businesses ensure compliance with data privacy regulations and ethical data practices while collecting, storing, and utilizing customer data for e-marketing purposes?
Addressing these questions is essential for businesses seeking to maximize the impact of their e-marketing efforts, build long-term customer relationships, and drive sustainable business growth in the digital age.
Relevant answer
Answer
Companies can effectively leverage digital technologies to create engaging, personalized, and seamless customer experiences by adopting a holistic and customer-centric approach. Firstly, investing in robust customer relationship management (CRM) systems allows companies to gather and analyze customer data, enabling them to understand preferences, behaviors, and interactions. This data serves as the foundation for personalized experiences.
Implementing artificial intelligence (AI) and machine learning (ML) algorithms can further enhance personalization by predicting customer preferences and recommending tailored products or services. Chatbots and virtual assistants powered by AI streamline customer interactions, providing instant and personalized support.
Moreover, companies should prioritize omnichannel strategies, ensuring a consistent and seamless experience across various digital touchpoints. This involves integrating online platforms, mobile apps, social media, and physical stores to create a unified customer journey.
To maintain engagement, personalized content delivery is crucial. Utilizing data-driven insights, companies can deliver targeted marketing messages, product recommendations, and promotions. This not only captures customer attention but also enhances brand loyalty.
  • asked a question related to Information Privacy
Question
4 answers
I would like to cordially invite all the young and experienced researchers and scientists out there who are looking for collaborations on IoT, AI, ML, FL, blockchain techniques, data privacy, and fake news detection to join my research lab. My lab is full of talented members, ranging from industry experts to postdocs, undergrads to PhD scholars. It's a common learning workspace to support and guide both qualitative and quantitative research. Please check out my lab and work and share your thoughts, or feel free to connect with me for future collaborations.
Relevant answer
Answer
Please check out my lab.
  • asked a question related to Information Privacy
Question
9 answers
What are the ethical implications of cloud computing, especially in terms of data privacy and responsible AI development?
  • asked a question related to Information Privacy
Question
2 answers
This question emphasizes the importance of considering the broader implications and risks of AI adoption in research. It encourages researchers to discuss the ethical, legal, and societal implications of AI, including concerns related to algorithmic bias, data privacy, security vulnerabilities, and potential unintended consequences of AI implementation.
Relevant answer
Answer
  1. Bias in Data and Models: AI systems can inherit biases present in their training data, leading to biased outcomes in research. Addressing and mitigating these biases is crucial to ensure fair and representative results.
  2. Privacy Concerns: The use of AI in research may involve analyzing sensitive or personal data. Protecting the privacy of individuals in research datasets and ensuring compliance with privacy regulations is essential.
  3. Security Vulnerabilities: AI systems can be vulnerable to adversarial attacks and data breaches. Researchers need to implement robust security measures to safeguard AI models and research data.
  4. Ethical Considerations: Researchers must navigate ethical dilemmas related to AI, such as the responsible use of AI in potentially sensitive areas like healthcare or criminal justice.
  5. Transparency and Accountability: Ensuring transparency in AI research, including disclosing methods and data sources, is crucial for the credibility and reproducibility of research findings.
  6. Human Augmentation: As AI systems become more integrated into research processes, questions arise about their potential to augment or replace human researchers, impacting employment and job roles.
  7. Algorithmic Fairness: Ensuring fairness in AI algorithms is vital to prevent discrimination in research outcomes, particularly in areas like hiring, lending, and criminal justice.
  8. Data Governance: Establishing clear data governance frameworks is essential to manage data collection, storage, and sharing, addressing potential ethical and legal challenges.
  9. Intellectual Property and Ownership: Defining ownership and intellectual property rights for AI-generated research outputs, such as content or inventions, can be complex and require legal clarity.
  10. Misuse and Dual Use: AI research can have dual-use potential, where technology developed for benign purposes may also be exploited for malicious ones. Researchers need to consider these risks.
  11. Regulatory Compliance: Adhering to evolving AI regulations and policies, both at national and international levels, is crucial to avoid legal and compliance issues.
  12. Algorithmic Accountability: Researchers should be prepared to be held accountable for the decisions and actions of AI systems they develop or deploy in research settings.
  13. Resource Allocation: The adoption of AI in research may require significant resources, and the potential for resource disparities among research institutions needs consideration.
  • asked a question related to Information Privacy
Question
9 answers
I was exploring differential privacy (DP), which is an excellent technique for preserving data privacy. However, I am wondering what performance metrics could be used to compare schemes with DP against schemes without DP.
Are there any performance metrics by which a scheme with DP can be compared to a scheme without DP?
Thanks in advance.
Relevant answer
Answer
  1. Epsilon (ε): The fundamental parameter of differential privacy that quantifies the amount of privacy protection provided. Smaller values of ε indicate stronger privacy guarantees.
  2. Delta (δ): Another parameter that accounts for the probability that differential privacy might be violated. Smaller values of δ indicate lower risk of privacy breaches.
  3. Accuracy: Measures how much the output of a differentially private query deviates from the non-private query output. Lower accuracy indicates more noise added for privacy preservation.
  4. Utility: Assesses how well the data analysis task can be accomplished while maintaining differential privacy. Higher utility implies less loss of useful information.
  5. False Positive Rate: In the context of membership hypothesis testing, the probability of incorrectly identifying an individual as being in the dataset when they are not.
  6. False Negative Rate: The probability of failing to identify a sensitive individual present in the dataset.
  7. Sensitivity: Defines the maximum impact of changing one individual's data on the query output. It influences the amount of noise introduced for privacy.
  8. Data Reconstruction Error: Measures how well an adversary can reconstruct individual data points from noisy aggregated results.
  9. Risk of Re-identification: Measures the likelihood that an attacker can associate a specific record in the released data with a real individual.
  10. Privacy Budget Depletion: Tracks how much privacy budget (ε) is consumed over multiple queries, potentially leading to eventual privacy leakage.
  11. Trade-off Between Privacy and Utility: Evaluates the balance between privacy gains and the degradation of data quality or analysis accuracy.
  12. Adversarial Attack Resistance: Assessing the effectiveness of differential privacy against adversaries attempting to violate privacy by exploiting the noise added to the data.
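To make the trade-off between metrics 1 (epsilon) and 3 (accuracy) concrete, here is a minimal, standard-library-only sketch of the Laplace mechanism for a counting query, which has sensitivity 1. This is an illustration of the mechanism, not a production DP implementation (it ignores floating-point attacks and budget accounting):

```python
# Laplace mechanism sketch: a count query gets noise drawn from
# Laplace(0, sensitivity/epsilon); smaller epsilon means more noise.
import math, random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(data, predicate, epsilon: float) -> float:
    true_count = sum(1 for x in data if predicate(x))
    return true_count + laplace_noise(scale=1.0 / epsilon)  # sensitivity = 1

random.seed(0)
data = list(range(100))  # true count of x < 50 is exactly 50
# Empirical accuracy: mean absolute error of the noisy count over 500 trials.
# Smaller epsilon => stronger privacy => larger error.
errors = {eps: sum(abs(private_count(data, lambda x: x < 50, eps) - 50)
                   for _ in range(500)) / 500
          for eps in (0.1, 1.0, 10.0)}
```

The `errors` dictionary directly exhibits the privacy/utility trade-off (metric 11): the expected absolute error of the Laplace mechanism is exactly the noise scale 1/ε.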
  • asked a question related to Information Privacy
Question
3 answers
As homomorphic encryption schemes like CKKS are not capable of performing non-linear functions such as comparison, other PET techniques, including MPC, can provide a level of security for a desired machine learning application.
Currently, I'm searching for recent related works that combine CKKS and MPC, particularly 2PC function secret sharing. Every idea would be greatly appreciated!
Relevant answer
Answer
Dear Dr. Shokofeh Vahidiansadegh ,
The following articles should be at least partially relevant:
Practical MPC+FHE with Applications in Secure Multi-Party Neural Network Evaluation (iacr.org)
Introduction to the CKKS encryption scheme
CKKS EXPLAINED, PART 3: ENCRYPTION AND DECRYPTION
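Running CKKS itself requires a library such as Microsoft SEAL or TenSEAL, but the MPC half of the question can be illustrated with no dependencies. Below is a toy sketch (my own, not from any of the papers above) of 2-party additive secret sharing over a prime field, one of the basic building blocks 2PC protocols are layered on: neither share alone reveals the secret, yet shares of two values can be added locally and then recombined.

```python
# Toy 2-party additive secret sharing: secret = share1 + share2 (mod P).
# Each share alone is uniformly random, so it leaks nothing about the secret.
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

def share(secret: int) -> tuple[int, int]:
    r = random.randrange(P)
    return r, (secret - r) % P

def reconstruct(s1: int, s2: int) -> int:
    return (s1 + s2) % P

a1, a2 = share(123)
b1, b2 = share(456)
# Addition is "free": each party adds its own shares locally, no communication.
total = reconstruct((a1 + b1) % P, (a2 + b2) % P)  # 579
```

Multiplication and comparisons are where this gets hard (Beaver triples, garbled circuits, or function secret sharing), which is exactly the gap the CKKS+MPC hybrid works in the links above try to bridge.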
  • asked a question related to Information Privacy
Question
25 answers
I clicked on the "data & privacy policy" link and it says some gobbledygook about "To alert you to special offers", "To notify you about a material change", "To help us create and publish content most relevant to you". Any advice?
Relevant answer
Answer
What an insightful question! It's understandable to have concerns, but it's important to note that both the LGBTQ and Muslim communities have faced discrimination and prejudice across the continent. Muslims are arguably even worse off: from Palestinians, the Rohingya, and the Muslims of India to the Uyghurs, and in Europe, the USA, Canada, and other developed nations, Muslims are subject to airport interrogation, severe discrimination, and public assault over their physical grooming and belief framework. Worse, they are labelled as terrorists, while Zionists have been killing Palestinians every single day since the Balfour Declaration. Islamophobia is a phenomenon that no single leader in the world has been able to stop. However, when it comes to the world of academia, unique ideas and perspectives are highly valued. It's all about finding the right publication and publisher who appreciate originality in research and writing. Best of luck with your submission, Doctor!
  • asked a question related to Information Privacy
Question
2 answers
What is the role of data privacy policies in digital innovation management?
Relevant answer
Answer
Data privacy policies play a crucial role in digital innovation management by balancing the need for innovation and technological advancements with the protection of individuals' privacy and personal data. Here are some key roles of data privacy policies in digital innovation management:
  1. Trust and User Confidence: Data privacy policies help build trust and confidence among users by assuring them that their personal information will be handled securely and responsibly. When individuals trust that their data will be protected, they are more likely to engage with digital innovations and share their information, enabling organizations to innovate and develop new technologies.
  2. Compliance and Legal Framework: Data privacy policies provide a legal framework that organizations must adhere to when collecting, storing, processing, and sharing personal data. Compliance with privacy regulations, such as the European Union's General Data Protection Regulation (GDPR) or similar laws in other jurisdictions, ensures that organizations manage data responsibly and respect individuals' privacy rights. This promotes ethical and accountable digital innovation practices.
  3. Risk Management: Data privacy policies help manage the risks associated with handling personal data in digital innovation. By implementing privacy safeguards, organizations can minimize the potential for data breaches, unauthorized access, or misuse of personal information. This protects both individuals and organizations from reputational damage, legal liabilities, and financial losses.
  4. Ethical Considerations: Data privacy policies integrate ethical considerations into digital innovation management. They guide organizations to adopt responsible data practices, such as obtaining informed consent, ensuring data minimization and purpose limitation, and implementing strong security measures. Ethical handling of personal data is essential for maintaining public trust and ensuring that technological advancements respect individual privacy and dignity.
  5. Innovation with Privacy by Design: Data privacy policies promote the concept of "privacy by design" in digital innovation. Privacy by Design emphasizes embedding privacy and data protection principles into the design and development of technologies from the early stages. By considering privacy implications and implementing privacy-enhancing measures proactively, organizations can foster innovative solutions that prioritize privacy and minimize potential risks.
  6. International Data Flows: Data privacy policies also address international data flows and cross-border data transfers. They establish mechanisms for transferring personal data between countries while ensuring an adequate level of protection. These policies facilitate global collaborations and support the growth of digital innovation ecosystems on an international scale.
In summary, data privacy policies provide a framework for responsible and ethical digital innovation management. They ensure the protection of individuals' privacy rights, foster trust, and enable organizations to innovate while managing risks associated with personal data handling. By integrating privacy considerations into the innovation process, organizations can build sustainable and privacy-respecting digital solutions.
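As one illustration of how such a policy can be enforced in software rather than on paper, here is a minimal sketch of purpose limitation (mentioned in point 4 above): personal data is only released when the requested purpose appears in the data subject's consent record. All class and field names here are hypothetical.

```python
# Purpose-limitation sketch: every read of personal data is gated by the
# purposes the data subject consented to when the data was collected.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    subject_id: str
    purposes: set = field(default_factory=set)  # e.g. {"billing", "analytics"}

class PurposeLimitedStore:
    def __init__(self):
        self._data, self._consents = {}, {}

    def store(self, consent: ConsentRecord, data: dict):
        self._consents[consent.subject_id] = consent
        self._data[consent.subject_id] = data

    def read(self, subject_id: str, purpose: str) -> dict:
        consent = self._consents[subject_id]
        if purpose not in consent.purposes:
            raise PermissionError(f"no consent for purpose {purpose!r}")
        return self._data[subject_id]

store = PurposeLimitedStore()
store.store(ConsentRecord("u1", {"billing"}), {"email": "u1@example.com"})
record = store.read("u1", "billing")   # allowed: "billing" was consented to
# store.read("u1", "marketing")        # would raise PermissionError
```

The design choice, checking purpose at every access rather than at collection time, is what turns a privacy policy into "privacy by design": the innovation team cannot quietly repurpose data without an explicit, auditable consent change.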
  • asked a question related to Information Privacy
Question
3 answers
The field of instructional design and educational technology encompasses a wide range of technical considerations, but what do you think is the most important one? Could it be the selection and implementation of appropriate technology tools and platforms? Or data privacy and security? Any other?
Relevant answer
Answer
A very good example of something which demonstrates this is this article from Wired:
May 26, 2023 6:00 AM
Where Memory Ends and Generative AI Begins
New photo manipulation tools from Google and Adobe are blurring the lines between real memories and those dreamed up by AI.
  • In late March, a well-funded artificial intelligence startup hosted what it said was the first ever AI film festival at the Alamo Drafthouse theater in San Francisco. The startup, called Runway, is best known for cocreating Stable Diffusion, the standout text-to-image AI tool that captured imaginations in 2022. Then, in February of this year, Runway released a tool that could change the entire style of an existing video with just a simple prompt. Runway told budding filmmakers to have at it and later selected 10 short films to showcase at the fest.
The short films were mostly demonstrations of technology; well-constructed narratives took a backseat. Some were surreal, and in one instance intentionally macabre. The last film shown made the hair stand up on the back of my neck. It felt as though the filmmaker had deliberately misunderstood the assignment, eschewing video for still images. Called Expanded Childhood, the AI “film” was a slideshow of photos with a barely audible echo of narration.
Director Sam Lawton, a 21-year-old film student from Nebraska, later told me he used OpenAI’s DALL-E to alter the images. He assembled a series of photos from his childhood, fed them to the AI tool, and gave it various commands to expand the images: to fill in the edges with more cows, or trees; to insert people into the frame who hadn’t really been there; to reimagine what the kitchen looked like. Toss another puppy into the bathtub—why not? Lawton showed the AI-generated images to his father, recorded his befuddled reactions, and inserted the audio into the film.
“No, that’s not our house. Wow—wait a minute. That’s our house. Something’s wrong. I don’t know what that is. Do I just not remember it?” Lawton’s father can be heard saying.
Where do real memories end and generative AI begin? It’s a question for the AI era, where our holy photos merge with holey memories, where new pixels are generated whole cloth by artificial intelligence. Over the past few weeks, tech giants Google and Adobe, whose tools collectively reach billions of fingertips, have released AI-powered editing tools that completely change the context of images, pushing the boundaries of truth, memory, and enhanced photography.
Google dipped its toes in the water with the release of Magic Eraser in 2021. Now the company is testing Magic Editor, a feature on select Android phones that repositions subjects, removes photobombers, and edits out other unseemly elements, then uses generative AI to fill in pixel gaps. Adobe, arguably the most famous maker of creative editing software, announced earlier this week that it was putting its generative AI engine Firefly into Adobe Photoshop. The aptly named Generative Fill feature will edit photos and insert new content via a text-based prompt. Type in “add some clouds” and there they appear.
Adobe is calling it a “co-pilot” for creative workflows, which parrots the phrasing that other tech companies, such as Microsoft, are using to describe generative AI apps. It implies that you are still in total control. In this framing AI is merely offering an assist, taking over navigation when you need a bathroom break. This is something of a misportrayal when the AI is actually acting as a cartographer, redrawing the maps of your existence.
“‘Perfect your memories’ is perhaps the most haunting phrase I’ve ever read,” Signal Foundation president and former Googler Meredith Whittaker tweeted in February, in response to Google’s announcement that its Magic Eraser tool could now be used in videos, not just in photos. In its marketing of the tool, Google shows an image of a young girl facing a choppy sea. Nearer to the shoreline is a family of four, presumably not hers. Magic Eraser disappears them.
Let’s be totally clear: We could always edit photos. Whether by scissor, razor, or paint, as long as the printed photo has existed, we’ve edited. Photoshop’s provenance was timed to the rise of the personal computer, which, non-hyperbolically speaking, changed everything.
The first version of Photoshop launched in 1990. “Jennifer in Paradise” was the digital photo seen around the world: an image of Photoshop cocreator John Knoll’s wife sitting on a beach in Bora Bora. In demos, Knoll would outline his wife using the now-famous lasso tool, then clone her. He copied, pasted, and diffused an island in the distance. “A duplicate island!” Knoll said in a video posted to Adobe’s YouTube channel in 2010. An island that was not really there. A fabricated land mass.
What’s different today—what generative AI is pushing boundaries on—is the speed with which these edits can be made and who can make them. “Editing tools have existed for a long time,” says Shimrit Ben-Yair, the head of Google Photos. “And obviously we’ve been offering editing tools on Photos for a while now. As these platforms have grown their user bases, these tools become much more accessible and available to people. And edited images become more common.”
In a private demonstration of Google’s Magic Editor tool, which ships later this year, Ben-Yair pulled up yet another beach photo. This one featured two kids sporting wetsuits and boogie boards, with two adults in the distant background. The kids and adults have different skin tones, and the somewhat uncomfortable assumption in this demo—also emphasized by the distance between them—is that they are not family. Google’s Magic Editor outlined the adults in the background, then disappeared them.
In another demo, Magic Editor erased the bag strap from a woman’s shoulder as she posed in front of a waterfall, then filled in the gaps with more jacket material. Why the bag strap in a hiking photo was so bothersome, I do not know. But those aesthetic decisions are the prerogative of the photo’s creator, Google says.
Adobe’s Generative Fill is much more, well, generative. A long-haired corgi scampers down an empty road. That’s it, that’s the photo. But Generative Fill lengthens the road. It transforms barren trees into a springtime bloom. A white pickup truck appears, and whether it’s driving toward the corgi or away from it changes the tension of the photo in a notable way. But, look, now there are puddles. Surely that’s a happy photo? Generative AI is even smart enough to draft a reflection of the scampering pup in the puddles. It does this all in seconds. I’m blown away.
But after the astonishment comes “What now?” Suppose that is my hiking photo, my dog, my family on the beach. How will I remember that day if in the future they are only watercolor in my brain, and I increasingly turn to my photo roll for more vivid strokes? Did I actually not carry a bag while hiking? Did the pickup truck come dangerously close to my dog that day? Did I only ever vacation on pristine, private beaches?
Executives at both Google and Adobe say the power of the tools must be considered within the context of the photo. Who is taking it, who is sharing it, where it’s being shared to. “I think in the context of a public space, there are different expectations than that of a photo being shared in a private space,” says Ben-Yair. “If someone is sharing a photo with you via Google Photos itself or a messaging app that you use, you trust that source. And you might see the editing as something that enhances the photo, because you trust that source.”
“But the more layers of abstraction there are,” she continues, “where you don’t know the source, then yeah, you have to think through, how authentic is this photo?”
Similarly, Andy Parsons of Adobe says there’s a “continuum of use cases” for AI-edited photos. An artist (or individual who fancies themself an artist) might use generative AI to alter a photo that’s meant to be a creative interpretation, not documentation. On the other hand, “if it’s very critically important to know that what’s being presented in the photo is a reflection of reality, such as in a news organization, we expect to see more and more photographers being required to provide transparency,” Parsons says.
Parsons is something like the king of provenance at Adobe. His actual title is senior director of the Content Authenticity Initiative, a group Adobe cocreated in 2019 to establish cross-industry guidelines around content origination and media transparency. It was the doctored Nancy Pelosi video, Parsons says, in which the Speaker of the House appeared to be slurring her words, that “again, changed history.” Even though the editing wasn’t credited to AI, the sheer manipulation of the Pelosi video made Adobe reconsider how its powerful editing tools might be used. Adobe’s earliest partners in the CAI were Twitter and The New York Times.
Then, in 2021, Adobe joined forces with the BBC, chip-makers Intel and ARM, and Microsoft to create yet another consortium for standards around “digital provenance,” called the Coalition for Content Provenance and Authenticity, or C2PA. The Coalition now has more than a thousand members across various industries. At Microsoft’s annual software conference this week, the company said that its Bing Image Creator will soon use C2PA-standard cryptographic methods to sign AI-generated content. (Google’s Ben-Yair also says this is an “active area of work for the company that we’re going to explain once we get closer to the launch of it.”)
“We’re all focused on the same idea,” Parsons says. “We’ve kind of lost the arms race in detecting what may be fake. The chasm has been crossed. So the protection and countermeasure we have is knowing what model was used to capture or create an image and to make that metadata trustworthy.”
In theory, these cryptographic standards ensure that if a professional photographer snaps a photo for, say, Reuters and that photo is distributed across Reuters international news channels, both the editors commissioning the photo and the consumers viewing it would have access to a full history of provenance data. They’ll know if the cows were punched up, if police cars were removed, if someone was cropped out of the frame. Elements of photos that, according to Parsons, you’d want to be cryptographically provable and verifiable.
Of course, all of this is predicated on the notion that we—the people who look at photos—will want to, or care to, or know how to, verify the authenticity of a photo. It assumes that we are able to distinguish between art, social media, and news, and that those categories are clearly defined. Transparency is great, sure; I still fell for Balenciaga Pope. The image of Pope Francis wearing a stylish jacket was first posted in the subreddit r/Midjourney as a kind of meme, spread amongst Twitter users and then picked up by news outlets reporting on the virality and implications of the AI-generated image. Art, social, news—all were equally blessed by the Pope. We now know it’s fake, but Balenciaga Pope will live forever in our brains.
After seeing Magic Editor, I tried to articulate something to Shimrit Ben-Yair without assigning a moral value to it, which is to say I prefaced my statement with, “I’m trying to not assign a moral value to this.” It is remarkable, I said, how much control of our future memories is in the hands of giant tech companies right now simply because of the tools and infrastructure that exist to record so much of our lives.
Ben-Yair paused a full five seconds before responding. “Yeah, I mean … I think people trust Google with their data to safeguard. And I see that as a very, very big responsibility for us to carry.” It was a forgettable response, but thankfully, I was recording. On a Google app.
After Adobe unveiled Generative Fill this week, I wrote to Sam Lawton, the student filmmaker behind Expanded Childhood, to ask if he planned to use it. He’s still partial to AI image generators like Midjourney and DALL-E 2, he wrote, but sees the usefulness of Adobe integrating generative AI directly into its most popular editing software.
“There’s been discourse on Twitter for a while now about how AI is going to take all graphic designer jobs, usually referencing smaller Gen AI companies that can generate logos and what not,” Lawton says. “In reality, it should be pretty obvious that a big player like Adobe would come in and give these tools straight to the designers to keep them within their ecosystem.”
As for his short film, he says the reception to it has been “interesting,” in that it has resonated with people much more than he thought it would. He’d thought the AI-distorted faces, the obvious fakeness of a few of the stills, compounded with the fact that it was rooted in his own childhood, would create a barrier to people connecting with the film. “From what I’ve been told repeatedly, though, the feeling of nostalgia, combined with the uncanny valley, has leaked through into the viewer’s own experience,” he says.
Lawton tells me he has found the process of being able to see more context around his foundational memories to be therapeutic, even when the AI-generated memory wasn’t entirely true.
  • asked a question related to Information Privacy
Question
32 answers
Currently I am thinking of pursuing research in computing, ideally in the area of cybersecurity problems and IoT. So far my proposed research title is "Security risk assessment in IoT systems: data privacy and security." Any suggestions, please, experts?
Relevant answer
Answer
There are many current problems in cybersecurity and IoT, some of which include:
1. Device vulnerabilities: IoT devices are often designed with weak security, making them an easy target for hackers.
2. Distributed denial-of-service (DDoS) attacks: These attacks are increasing in frequency and becoming more sophisticated, making them difficult to prevent.
3. Lack of standards: There is currently a lack of industry-wide standards for IoT security, making it difficult for manufacturers to ensure that their devices are secure.
4. Data privacy: IoT devices often collect and store large amounts of data, putting personal and sensitive information at risk.
5. Insider threats: Employees, contractors, and other insiders can pose significant security risks to IoT systems, as they can intentionally or inadvertently cause data breaches.
6. Cloud security: Many IoT devices rely on cloud services to operate, making them vulnerable to cloud-based attacks.
To address these problems, it is important for businesses and individuals to implement strong cybersecurity measures, such as strong passwords, encryption, and regular updates. Additionally, industry-wide standards and regulations can help ensure that IoT devices are designed and manufactured with security in mind.
  • asked a question related to Information Privacy
Question
2 answers
I am a Master's student writing my thesis on a compliance issue in the automotive industry, mainly dealing with software development. I have linked this issue to resistance to change management as one of the reasons, apart from my other hypotheses gathered during my internship while having casual communication with frontline employees and supervisors.
To verify or refute these hypotheses, I conducted in-person interviews utilizing a questionnaire consisting of both open-ended and closed-ended questions. The data collected encompassed qualitative responses from the open-ended questions, as well as quantitative data obtained through the use of closed-ended questions and scales. During these secondary interviews, a new hypothesis was found.
When selecting a sample for a process, Quota Sampling (using the highest number of tickets) is commonly utilized. Additionally, to choose individuals to interview who have been assigned tickets in individual processes, Purposive or Judgmental Sampling is employed, considering their availability and location.
Currently, I am in the last phase of conducting a thematic analysis of this data using an employee-driven improvement approach. As a precaution for data privacy, the organization I work for prohibits using transcription or recording tools. Therefore, I solely relied on notes taken during the interview.
I'd like to confirm whether my approach falls under Grounded Theory and how it can be classified – deductive, inductive, or a combination of both.
I wonder if it's appropriate to refer to my research approach in an empirical study as mixed methods.
Lastly, I would greatly appreciate your input on the research purpose – whether it falls under the categories of Exploratory, Descriptive, or Explanatory. Personally, I am attempting to achieve all three objectives. I am curious to hear your thoughts on this matter.
I'd be so grateful for any help you can provide. Thanks!
Relevant answer
Answer
This is certainly not grounded theory because you have not alternated data collection and data analysis in the systematic fashion required by that method. To be mixed methods research you would need to integrate the results from qualitative and quantitative methods, but it sounds like your N would be too small to produce meaningful quantitative results.
That leaves exploratory research, which would be a good match for a thematic analysis.
  • asked a question related to Information Privacy
Question
6 answers
What are the priorities that need to be addressed for AI systems to be dependable and trustworthy?
Data-intensive AI systems are expanding rapidly and are affecting many aspects of our daily lives. While the effectiveness of AI systems is growing, their dependability and trustworthiness seem to be endangered. Redefining and dealing with issues like data security and data privacy is among the key challenges AI systems are facing. What is your opinion on that?
Relevant answer
Answer
* Big corporations driving the research to suit their market needs and revenue
* Academics funded by big corporations skewing the research directions and outcomes
* Research only focussed on publishing papers without an iota of concern for quality and ethics
In a nutshell, the issues with AI are primarily due to the declining quality of academic research at elite institutions.
  • asked a question related to Information Privacy
Question
3 answers
How does viAct stick to data privacy norms to fulfill GDPR and other privacy compliance requirements?
Relevant answer
Answer
viAct is fully GDPR compliant.
  • asked a question related to Information Privacy
Question
5 answers
Answers in the survey will be analyzed through a hybrid multi-criteria decision-making model, and the data privacy act will be followed. We are willing to have you as our co-authors with your participation in the research. We humbly request your assistance in answering this survey. We would also like to kindly ask that you not share any information about our research with others due to its novelty/uniqueness. We intend to publish this as a peer-reviewed scientific journal in the future.
Feel free to message me to let me know, thanks!
Relevant answer
Answer
Peter Broadhurst Thank you peter, sent you a message. Please check. Have a great day!
  • asked a question related to Information Privacy
Question
5 answers
While multiple studies have been conducted to understand the importance of free will and data privacy in the context of neuromarketing, I have yet to come across a study which quantifies the issues. I would appreciate any suggestion on this, to help with my coursework. Thanks.
Relevant answer
Answer
External influences on the work - profession, family life, and so on.
Social conditions and relations with other people also, I think, influence the ethical issues of neuromarketing.
  • asked a question related to Information Privacy
Question
4 answers
Dear all,
I have an epidemiological questionnaire for a project that I am currently involved in. We need to use a web-based questionnaire for the patients, but the questionnaire will include personal and other confidential information about the patients. Could you suggest a web-based questionnaire tool which accepts this type of information and offers high data privacy? As far as I know, SurveyMonkey and LimeSurvey are widely used in psychology research, but I am not quite sure that these fit our purpose.
Many thanks
  • asked a question related to Information Privacy
Question
4 answers
Currently, due to data privacy and security concerns, fewer institutes are willing to share their data. What can be done by both the medical and computer science domains to deal with this issue?
Relevant answer
Answer
The current state of cybersecurity in the world doesn't allow or makes it prohibitively expensive for an easy access to open-sourced medical data. The lesser of the two evils must be chosen.
  • asked a question related to Information Privacy
Question
1 answer
As proposed by Cynthia Dwork, differential privacy has gained a lot of attention nowadays in the field of data privacy.
Also, there are various versions of Differential Privacy (DP); mainly, either Local DP or Global DP is used.
Can anyone suggest which DP is best/good for which data types?
Relevant answer
Answer
The General Data Protection Regulation (GDPR) is a robust privacy law that was created by the European Union (EU) in 2016 and became effective in 2018. ... The purpose of the GDPR is to update digital security for the citizens of the EU by giving them a higher level of control on the personal information they share online.
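On the Local vs Global DP question itself, the two settings can be illustrated with their classic primitives: randomized response (local DP, suited to binary or categorical values that users perturb on their own devices before an untrusted collector sees them) and the Laplace mechanism (global DP, suited to numeric aggregate queries answered by a trusted curator). This is a minimal sketch for intuition, not a production mechanism:

```python
import math
import random

def randomized_response(true_bit, epsilon):
    """Local DP: each user flips their own 0/1 answer before it ever
    leaves the device, so no one -- including the curator -- sees raw data.
    Truth is reported with probability e^eps / (1 + e^eps)."""
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return true_bit if random.random() < p_truth else 1 - true_bit

def laplace_count(bits, epsilon):
    """Global DP: a trusted curator computes the true count, then adds
    Laplace(1/epsilon) noise (a counting query has sensitivity 1).
    The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon)."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return sum(bits) + noise
```

Roughly: local DP fits user-held telemetry or survey-style data where the collector is untrusted, at a significant utility cost per user; global DP gives far better accuracy on aggregates when a trusted curator holds the raw data.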
  • asked a question related to Information Privacy
Question
7 answers
Considering the use of facial recognition cameras on autonomous vehicles, the storage of data collected, the security of the data and access, do UK data privacy laws go far enough to protect peoples rights?
Relevant answer
Answer
Great question - given the attitude afforded by the Investigatory Powers Act in the UK, there is a huge privacy concern posed by CAVs. These are essentially mobile surveillance systems, and the concern is how footage from the onboard interior- and exterior-facing cameras is used and whether state bodies can request access to it. Lots of really interesting questions surrounding CAV use and development in the UK.
Very best wishes, James.
  • asked a question related to Information Privacy
Question
2 answers
xAPI is the new e-learning specification intended to outdo SCORM. Any recommendations on undertaking a data privacy assessment?
Thanks!
Relevant answer
Answer
Hi,
xAPI works by producing simple statements consisting of a noun, verb, and object. For example: Jack has completed Basic Business Math. The statement that's produced lets you know what e-learning Jack has accessed and exactly when it was completed. This information is stored in the Learning Record Store (LRS).
The main data structure used by xAPI to describe tracked experiences is called a statement. The xAPI service enables you to call any of the application xAPIs and return results, which can then be used by a later step in the business process.
To send an xAPI statement:
  1. Get a library. Visit the code libraries page and download the library. ...
  2. Install the library. Follow the installation instructions to include your library in a project. ...
  3. Configure the LRS. You need to configure the library to send the statement to your LRS. ...
  4. Send the statement. Great!
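For concreteness, this is roughly what a minimal "noun, verb, object" statement looks like before a library sends it to the LRS (the mailbox and course IRI here are invented for illustration). From a data-privacy standpoint, note that the `actor` block is where the personal data lives, so it is the natural focus of a GDPR/DPIA review of any xAPI deployment:

```python
import json

# Minimal xAPI statement: "Jack completed Basic Business Math".
# The mbox and course IRI below are made-up illustrative values.
statement = {
    "actor": {"name": "Jack", "mbox": "mailto:jack@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/courses/basic-business-math",
        "definition": {"name": {"en-US": "Basic Business Math"}},
    },
}

# This JSON body is what gets POSTed to the LRS's statements endpoint.
payload = json.dumps(statement)
```

A privacy assessment would then ask: who can query the LRS, how long statements are retained, and whether the actor identifier can be pseudonymized.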
  • asked a question related to Information Privacy
Question
6 answers
I have just started working on big data privacy, and I found that privacy issues with unstructured data are not being focused on much. Therefore, I am thinking of working on this. Kindly provide any material or suggestions related to this topic.
Thanks in advance.
Relevant answer
Answer
Dear Ekhlas A. Hussein,
I would like to add a few words to this discussion on this important topic. The issue of privacy protection in Big Data analysis for unstructured data is an important and developmental topic. Due to the fact that more and more companies, enterprises, corporations, public and financial institutions use the analytics of large sets of internal data (generated in a specific economic entity or institution) and external (obtained from the Internet), use Big Data Analytics platforms, so the importance of information security and data obtained from various sources, archived and processed will increase. As part of improving the security systems of computerized Big Data Analytics platforms and the data stored on these platforms, intelligent gateway and firewall solutions are created when these platforms download data from the intranet and the Internet. In addition, the systems of institutional email applications are being improved in order to reduce the scale of successful cybercriminal attacks by cybercriminals using malware and ransomware viruses hidden in spamming.
Regards,
Dariusz Prokopowicz
  • asked a question related to Information Privacy
Question
6 answers
I was exploring federated learning algorithms and reading this paper (https://arxiv.org/pdf/1602.05629.pdf). In this paper, they average the weights received from clients, as in the attached file. In the marked part, they consider the total number of client samples and the individual client sample counts. As far as I have learned, federated learning was introduced to keep data on the client side to maintain privacy. Then how does the server know this information? I am confused about this concept.
Any clarification?
Thanks in advance.
Relevant answer
Answer
Thanks for your input. I have their code, and they follow the same approach. I have attached their code below.
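For readers of this thread: the averaging step in question is just a weighted mean, and the only thing the server needs from each client beyond the trained weights is the scalar sample count n_k, which is sent as metadata alongside the update; the raw samples themselves never leave the device. (Even the counts are a mild leak, which is one motivation for secure aggregation and differentially private variants.) A minimal sketch of the server step, with each model's weights flattened to a plain list:

```python
def fedavg(client_weights, client_sizes):
    """FedAvg server step: w = sum_k (n_k / n) * w_k.

    client_weights: one flat list of floats per client (trained locally).
    client_sizes:   n_k for each client -- a single integer of metadata,
                    not the client's data itself.
    """
    total = sum(client_sizes)
    averaged = [0.0] * len(client_weights[0])
    for weights, n_k in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += (n_k / total) * w
    return averaged
```

For example, two clients with weights [1.0, 1.0] and [3.0, 3.0] and sample counts 1 and 3 average to [2.5, 2.5]: the larger client dominates in proportion to its data.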
  • asked a question related to Information Privacy
Question
3 answers
Can anybody share with me a dataset from smart factory that we can use for experiments related to privacy / data anonymization / data protection?
Thanks.
Relevant answer
Answer
Dear Agnes Koschmider,
Have you got any datasets? Can you share it with me?
  • asked a question related to Information Privacy
Question
3 answers
Hoping to get some insight on your ideas or contributions towards the privacy of data in e-mental health services and applications.
Kindly leave a comment with your thoughts on data privacy, especially in Europe.
This is my survey, and I am collecting (anonymous) data about what you think about having your sensitive data out there.
Please follow the link to my survey.
Would appreciate it.
Relevant answer
Answer
Privacy is part of transaction security in online transactions, and it requires good mutual out-of-band multi-factor client-server authentication and encryption of data in transit.
  • asked a question related to Information Privacy
Question
6 answers
As far as I know, Egypt is the latest country to enact a Data Protection Law, on 13 July 2020. Accordingly, the total number of countries with data protection laws is 143. Is there any update other than this?
Relevant answer
Answer
Thanks for this interesting question. I agree with Syed Hassan.
  • asked a question related to Information Privacy
Question
8 answers
Data sharing increases the utilization of data, which requires protecting the privacy of the data owner.
There are a number of regulations that try to make sure that data custodians ensure the privacy of data subjects. However, these regulations do not seem to hold during pandemics, not to mention the data privacy dangers that arise from the limited effectiveness of privacy systems as well as algorithms' data leaks.
What possible dangers to data privacy during the corona pandemic can you think of?
Relevant answer
Answer
A very good substantive question. During the SARS-CoV-2 (Covid-19) coronavirus pandemic, the scale of digitization and internationalization increased across remote communication, remote work, e-learning, online shopping and payments, and many other communication, economic, financial, and social processes. The scale of using ICT, Internet, and Industry 4.0 information technologies in these processes has grown accordingly. On the other hand, there are doubts whether the pace of improvement in cybercrime risk management systems is keeping up with this progress: the risks of data transfer on the Internet, and of viruses and malware on smartphones used for mobile internet banking applications, are growing alongside the digitization and internetization of communication, economic, financial, social, and other processes.
Best regards, Have a nice day, Stay healthy!
Dariusz Prokopowicz
  • asked a question related to Information Privacy
Question
3 answers
I am interested to know if anyone has conducted online studies with a hierarchical linear modelling design in the field of education involving teachers and their students or in a similar field involving coach/mentor/supervisor and their athletes/employees? What platforms have you used and how has it been compatible with data privacy? What kind of ethical issues have you encountered?
Relevant answer
Answer
Interesting topic.
  • asked a question related to Information Privacy
Question
2 answers
I have recently started studying genomic data privacy and it seems the field is relatively new. I am looking for the existing problems. Implementing Homomorphic encryption or differential privacy has a lot going on. Can anyone suggest any other existing challenges?
Relevant answer
Answer
Given that the main routes to breaching privacy are
- identity tracing,
- attribute disclosure attacks using DNA (ADAD),
- and completion of sensitive DNA information,
the main questions would emerge from each of these distinct yet overlapping issues.
Genomic privacy though, I would argue, isn't necessarily about the academic exercise (fascinating as one might find it), but rather a more and more pressing issue, given the development of AI, as well as the various regulatory frameworks concerning data and individual privacy...
I do hope this may have been of some use, though I'm sure a lot has happened in the field since the question was posted. Best of luck with your research!
  • asked a question related to Information Privacy
Question
9 answers
How does public policy impact the use of the internet and help industries invade our privacy? Do fundamental rights (such as freedom of expression and the right to personal privacy) come into play?
Relevant answer
Answer
Dear Ridha Dhawan,
In my opinion, people who use various information services available on the Internet have very different knowledge about the security of personal data posted on various social media portals and / or transferred via various instant messaging and other Internet media. High variation in this knowledge results from different experiences, and above all from possible negative experiences of loss of sensitive data on the Internet or other effects of cyber criminals' activities. People who have less knowledge in this matter are willing to post more personal data about themselves on social media portals. In a situation of greater awareness of the emerging threats in this matter, Internet users use specific actions, practices (e.g. not opening e-mails of unknown origin) and instruments (e.g. anti-virus applications, frequent updating of operating systems). Therefore, Internet users who are individual clients of social media portals and other new Internet media have varied awareness of various aspects of the risk of potential loss of personal data in the Internet, etc., however, they do not examine these risks instrumentally. On the other hand, enterprises, mainly large corporations, financial institutions, including internet banks, build and improve risk management systems for IT systems connected to the Internet, etc. I have described this issue in more detail in some of my publications available on the Research Gate portal. I invite you to cooperation.
Best regards,
Dariusz Prokopowicz
  • asked a question related to Information Privacy
Question
22 answers
The protection of private life is essential since a possible injury might be impossible to repair. The video gives a larger view of the invasion of private life caused by social media, with a distinct emphasis on FB as the first mover in this business, but in the end only refers to possible financial damages. In my opinion, the safety of internet transactions is only a secondary aspect of a much bigger problem, which is the protection of privacy on the internet. I am asking for your opinion.
Relevant answer
Answer
Privacy and social media are in a paradoxical relationship.
  • asked a question related to Information Privacy
Question
8 answers
A need for Data Protection Officers is emerging very fast. After the adoption of the GDPR, organizations worldwide need hundreds of thousands of DPOs. Are universities ready? Are there enough data privacy programs/courses that put together information security and law?
Relevant answer
Answer
Agree with Ralf's views on this. Universities can look at industry linked programs in Risk and Compliance space and privacy can be covered under that.
  • asked a question related to Information Privacy
Question
6 answers
For providing big data privacy, it is important that the utility of the data/mining result is preserved. For that, before implementing anything, we first need to prove the concept/idea mathematically/theoretically.
Can anyone suggest good papers/articles/notes on a solid mathematical foundation for big data privacy-preserving models/algorithms?
Also, please suggest some open issues in big data privacy / privacy-preserving big data analytics.
Hoping for your kind reply.
Relevant answer
Answer
I think it is possible to add this using normalization techniques.
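On the mathematical-foundations side, one classical formal model worth starting from is k-anonymity (Sweeney), which makes the utility/privacy trade-off in the question explicit: quasi-identifier attributes are generalized until every combination of their values occurs at least k times, and each generalization step costs utility. A minimal check of the condition, as an illustration only (not an anonymization algorithm):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values is shared
    by at least k records -- Sweeney's k-anonymity condition."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return all(count >= k for count in groups.values())
```

k-anonymity alone is vulnerable to homogeneity and background-knowledge attacks, which motivated l-diversity, t-closeness, and ultimately differential privacy; that lineage is a good entry point into the formal literature the question asks for.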
  • asked a question related to Information Privacy
Question
6 answers
I am looking for an open-access journal with a reasonable time to publish, related to computer fields like big data, privacy, and security.
Relevant answer
Answer
I agree with Manjula.
  • asked a question related to Information Privacy
Question
3 answers
I am looking for case studies of actual privacy risks. At the core of privacy and data protection impact assessments, we find the concept of 'risk' meaning - in this case - the probability of a threat to personal data and the possible harm or damage caused by this threat. E.g. I fall victim to a phishing attack and the attacker gains access to my bank account, the actual harm being that my account is emptied. Another example would be that my account at a social media platform is hacked and my identity is used to "go shopping".
Now, one finds a lot of literature on privacy (PIA) and data protection impact assessments (e.g. the edited volume by Wright and De Hert (2012) on PIA), on the potential risks of low levels of data security (e.g. Rosner, Kenneally (2018): Clearly Opaque: Privacy Risks of the Internet of Things), on technological and organizational standards (e.g. ISO 27001 on information security management), or on the regulatory frameworks of privacy and data protection (e.g. everything on the details of the GDPR in the EU). But I have a hard time finding research results evaluating actual risks, similar to your risk of falling victim to a traffic accident, having your home broken into, or getting cancer.
I would welcome any hint to empirical publications on actual privacy risk analysis be it from medical, social, internet-based or any other research that you consider as most important. I am *not* looking for literature on how to conduct privacy and data protection impact assessments or standards for this purpose. Thank you.
Relevant answer
Answer
This is a great question, and it inspired me to look for some quantification of the risk and probability of data breaches and harm. I found the following reports, which may be of interest. They are largely from security companies and insurance companies, which would have access to this kind of data and might need it to set insurance policies.
  • asked a question related to Information Privacy
Question
4 answers
Hi, I am conducting a survey on proximity marketing in retail sectors, and your input would be appreciated. This questionnaire is part of the research for my dissertation and will take no more than 5 minutes to complete.
Respondent information will be kept anonymous, and data privacy will be maintained as per GDPR rules. Click the link below to start the survey. Thank you!
Relevant answer
Answer
Done, wish you the best.
  • asked a question related to Information Privacy
Question
4 answers
Since organizations and companies are among the biggest sources of data, it is striking that many of them are still not interested in taking advantage of it. Although there are obstacles and barriers that make managers hesitant to apply these technologies in practice, overcoming them could provide huge advantages for organizations. Some of the reasons include "organizational silos," privacy and security, costs, lack of appropriately skilled people, and organizational culture.
Could you please name some new challenges you may face in your organization or have experienced before?
Thanks
Relevant answer
Answer
I think, as you have mentioned, there's a lack of skilled employees and a general lack of knowledge on the subject because, in my opinion, it's still relatively in its infancy. It's also about trying to understand how to utilise it for the benefit of the company and how it affects things like GDPR.
  • asked a question related to Information Privacy
Question
3 answers
How can we apply Differential Privacy to real time data for privacy preserving?
Relevant answer
Answer
The link below may clarify this:
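To make the question concrete, here is a minimal Python sketch of the Laplace mechanism, one standard building block of differential privacy, applied to repeated releases of a running count over a stream. The event stream and the per-release epsilon are made-up values for illustration only:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two independent Exp(1) draws is Laplace(0, 1);
    # scaling the difference gives Laplace(0, scale).
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: add noise with scale = sensitivity / epsilon.
    return true_count + laplace_noise(sensitivity / epsilon)

# Perturbing each release of a running count over a stream of events:
stream = [1, 0, 1, 1, 0, 1]      # hypothetical event stream
epsilon_per_release = 0.5        # assumed per-release privacy budget
running = 0
releases = []
for event in stream:
    running += event
    releases.append(private_count(running, epsilon_per_release))
```

Note that each noisy release consumes privacy budget, so repeated releases over the same stream compose; real-time systems for continual observation therefore use more refined constructions (for example, tree-based aggregation) rather than perturbing every release independently.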
  • asked a question related to Information Privacy
Question
2 answers
I am interested in case study research and interviews with the employees of companies. I am researching both employees' rights and privacy and the data privacy of companies, as well as companies' ethical standards and permissions regarding publications about them.
Could you recommend any website, similar case, or article related to this issue?
Relevant answer
Answer
The Electronic Frontier Foundation "The leading nonprofit defending digital privacy, free speech, and innovation."
The Electronic Privacy Information Center
  • asked a question related to Information Privacy
Question
2 answers
I am a final-year BSc student (Marketing) and am thinking about a topic related to data disclosure (e.g. permission marketing) in an online context. Currently I am looking into different factors that would influence willingness to disclose personal data.
I'd like to know if there is a scale developed to measure paranoia in a digital context (digital/online paranoia?) or any relevant paper/research, for that matter. From what I understand, paranoia has been shown to be a personal characteristic shared almost universally to different degrees, not just a severe clinical condition; it would be interesting to see it in an online context.
Thank you in advance, any insight or clue, recommendation is really appreciated!
Relevant answer
Answer
You would probably find more information related to this vector of research if you looked at things associated with "risk tolerance". A highly paranoid person might be thought of as having very low tolerance for risk, while a person with a high tolerance for risk might be thought of as not being paranoid. I am over-simplifying, of course, from a psychological perspective, but since you are looking at behaviors associated with information, this terminology might be more useful to your ability to find related publications/research.
  • asked a question related to Information Privacy
Question
4 answers
I am an Information Systems student and I have a class project, so I am now seeking ideas I can start with. My concentration is on Mobile HCI, usable security and privacy, and social media.
Please help me start my project proposal.
Thank you in advance.
Relevant answer
Answer
Also maybe this paper helps you: https://www.researchgate.net/publication/323837643_A_model_for_users'_profile_recognition_based_on_their_behavior_in_online_applications?_sg=NUMFtFov-uuLhy4A8nzxo0kfndJVOTmI97d0BAwb_LJ4eumZWtCGjRT134qB7Lb6nKi00rrBIIPsTRzkmqIfW-w4gx11PnRoJLCP-KHk.GYvObA3FwTf-m99GeCQ9rqmf21H-PZb1mOlHacBweQp_DHd9fXgdZCS_ZtxxfDDEH90NVX3qEEeEkFmaA5a7Tw
  • asked a question related to Information Privacy
Question
9 answers
How do you feel about disclosing your private info on social media applications such as Facebook? Do you behave the same in your daily life, or is it something special for social media only? How much are you concerned about your privacy in general?
Relevant answer
Answer
Unfortunately, contrary to the assurances of the companies that run social media portals, the information contained on these websites is not always fully secured against the activities of cybercriminals. In addition, there is the issue of large companies downloading data from social media portals into Big Data database systems in order to process it for marketing purposes. The issue of privacy in social media is very important and is related to the security of personal information. Privacy is at risk with respect to information posted on social media portals.
The problems of analysing information contained on social media portals for marketing purposes are described in the publication:
I invite you to discussion and cooperation. Greetings
  • asked a question related to Information Privacy
Question
40 answers
Can anyone recommend some good sources of annotated (labeled) datasets for network security tests and Machine Learning (ML)? In general, various cybersecurity areas are welcome, but only reliable and trusted sources. Poor or incorrect annotations and malicious sources are not of interest, so please avoid those.
Relevant answer
Answer
Interesting
  • asked a question related to Information Privacy
Question
2 answers
Currently, I am working on a project (http://smile-h2020.eu/smile/) for smart mobility at the EU land borders to improve border control and management. SMILE increases the tendency to collect, use, and process hard and soft biometric data to optimize and speed up border checks, as well as to monitor the flows of people at land borders. In the SMILE system, travellers first need to pre-register their information, including enrolment of biometric data, using their mobile phone. The biometric information of the traveller is registered in a SMILE database and then becomes the only identifier of the traveller in a one-to-many comparison against all biometric databases in the EU. Second, upon arrival at the border control point (BCP), the traveller will be identified and verified at the SMILE "fast lane" using SMILE mobile devices. This will include people who travel in groups, such as families. Families with kids also need to register their kids' information (biometric data).
I am working on the legal and ethical assessment of SMILE use case scenarios and the business model. My concern now is the ethical and legal considerations of biometric data use in assessments with infants and children.
What is the best age for biometric data collection? Have age limits been considered in other contexts? If so, what are the age limits for biometric data collection in the case of infants and children?
Thanks
Relevant answer
Answer
A lot of work has been done in this respect. See, for example, what colleagues at the European Union Agency for Fundamental Rights have done: http://fra.europa.eu/en/publication/2018/biometrics-rights-protection
  • asked a question related to Information Privacy
Question
26 answers
I just want some clarification on data privacy as it is being addressed in research nowadays.
Is there really a need to code the research environment or locale?
Thanks.
Relevant answer
Answer
Thank all colleagues and friends
I Agree with Barbara Sawicka definitely!
  • asked a question related to Information Privacy
Question
3 answers
The GDPR seems to be more of a protectionist initiative for large and rich publishers. They say that "the GDPR improves transparency and the data privacy rights of individuals", but it seems to be an initiative that restricts science and reduces access to information. Is it? Please share your opinion.
Relevant answer
Answer
The GDPR can be considered from many angles.
For small organizations, it will involve many new responsibilities. It will also affect honest organizations with high standards, as there will now be a need to document compliance with those standards, whereas until now simply complying with them was sufficient.
Larger organizations will probably feel it less, because they are more formalized and bureaucratic anyway.
There will be fear of penalties, which may limit some activities. While most would probably agree that abuses and the uncontrolled trade in personal data should be limited, the problem of borderline actions taken in good faith will also appear, and these can likewise be interpreted as abuse and transgression.
Will it improve the protection of the right to privacy? That really depends on people and their awareness. The recent affair with Facebook showed how easily private data can be used; on the other hand, people who share their data on the Internet should be able to anticipate such situations. No regulation can replace reason and caution.
Will it affect research and the flow of information? I do not think so, not much. If these regulations had been in force many years ago, we would probably still know today that a particular AIDS patient travelled a lot and was homosexual, because those facts influenced the way the disease spread and showed that it initially affected homosexuals, but we would not know the name of the patient, which does not matter for understanding the mechanism of disease spread.
I work in data recovery. This is a very sensitive area when it comes to confidentiality and data security. If someone entrusts me with their medium, they expect that the data I recover will not be disclosed to anyone else, whether a jealous wife or the police, even if I suspect the data may be evidence of a crime. On the other hand, if the client is, for example, the police, it is not my role to protect someone's intimate secrets concerning legal but very personal matters.
Usually I do not analyse the contents of the media unless I am explicitly asked to, so I do not even know about many ethically doubtful situations. However, over the years I have several times seen things that left me with serious doubts about how to proceed. Loyalty to the client always prevailed. The client is the owner of the data and is responsible for its use.
  • asked a question related to Information Privacy
Question
4 answers
Having read many papers on privacy in big data, I have learned that people use different techniques to achieve it:
1. Anonymization techniques
2. Differential privacy
3. Homomorphic encryption
Each of the above has its own pros and cons, so could you kindly suggest which is best for providing privacy in big data analytics?
Also, where should privacy be provided: at the data generation phase, the data collection phase, or the data analytics phase?
Relevant answer
Answer
Thank you @roger Hallman
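As a toy illustration of the first technique listed in the question (anonymization, or more precisely pseudonymization plus generalization), here is a sketch of what could be applied at the data collection phase. The field names and the salt are hypothetical:

```python
import hashlib

def pseudonymize(record: dict, salt: str = "per-dataset-secret") -> dict:
    # Replace the direct identifier with a salted one-way pseudonym and
    # coarsen quasi-identifiers so individual records are harder to single out.
    decade = (record["age"] // 10) * 10
    return {
        "pid": hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:12],
        "age_band": f"{decade}-{decade + 9}",
        "zip_prefix": record["zip"][:3] + "**",
        "diagnosis": record["diagnosis"],   # sensitive attribute kept for analysis
    }

raw = {"name": "Alice Example", "age": 34, "zip": "90210", "diagnosis": "flu"}
anon = pseudonymize(raw)   # the name is gone; age and ZIP are generalized
```

Note that salted hashing is reversible for whoever holds the salt and does not by itself prevent re-identification through quasi-identifiers; in practice it is combined with checks such as k-anonymity on the generalized attributes.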
  • asked a question related to Information Privacy
Question
13 answers
I recently had a discussion with the go-to information privacy guy at our department about one of my research projects. The project is concerned with motivating and discouraging factors in sharing genome data and we (currently) use the privacy calculus as our theoretical lens.
The result of our discussion was that he advised to refrain from using the privacy calculus since:
  1. There are already too many privacy calculus papers out there, making it somewhat the next TAM and that he had witnessed papers being rejected simply because it was yet another privacy calculus paper.
  2. People do not behave this way and do not actually engage in such a calculus.
What do other scholars think about this? Is the privacy calculus at risk of being the next TAM? Should we still use it, if it fits our research?
Relevant answer
Answer
Hi Scott,
It was a pleasure meeting you at ICIS, and indeed your paper is interesting.
In my view, most privacy researchers adopt/adapt some form of the privacy calculus, even if the calculus is not mentioned. What is missing, however, are the boundary conditions of this calculus (e.g., cognitive, motivational, and situational factors, to name a few). Identifying these boundary conditions could be our way, as privacy researchers, to understand the complexity of privacy-related decision making.
good luck!
  • asked a question related to Information Privacy
Question
5 answers
Why should data privacy be provided when doing big data analytics?
What are the vulnerabilities in big data analytics with respect to privacy?
Relevant answer
Answer
Thank you @Maleh Yassine, @Martin Henze, @Dibakar Pal, @Venkatesh Gauri Shankar
  • asked a question related to Information Privacy
Question
1 answer
For example, privacy acts in the U.S. for healthcare, such as the Health Insurance Portability and Accountability Act of 1996, etc.
  • asked a question related to Information Privacy
Question
2 answers
I met with the CEO of a very interesting company that provides ultrasonic scanning of piping, along with a service where garnet is used to clean pipes and then a 3M product, ScotchKote, is applied to pipes all the way down to 3/4 to make them non-corrosive. I am just looking for any formal studies on this approach; it appears it could address the root of many issues, from both a municipal standpoint as well as for individual buildings.
Relevant answer
Answer
We conducted a study on acoustical methods for leak detection.  We have a conference paper published. A more comprehensive report should be out later this year.
  • asked a question related to Information Privacy
Question
3 answers
Hello,
I need the taxonomy tree for each attribute of the Adult dataset. It is used in a lot of articles, but I could not find it.
Can anybody help me?
Relevant answer
  • asked a question related to Information Privacy
Question
3 answers
PKI guarantees that a machine is who it says it is through a third-party system. But what about processes on the same machine? I want to validate that a signed or encrypted message came from a specific process.
Relevant answer
Answer
I think that a simple read lock should fulfill your requirement of giving merely one process access to a certificate repository. I caught a glimpse of the existing literature and found a patent about a read and write lock management at https://www.google.com/patents/US6029190. I hope it helps you.
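For completeness, one common pattern for the original question (attributing a message to a specific local process) is to have a trusted launcher provision a per-process secret and authenticate each message with an HMAC. Here is a minimal Python sketch; the process name and the key-distribution mechanism described in the comments are assumptions:

```python
import hashlib
import hmac
import os

# Hypothetical setup: a trusted launcher generates one secret per process it
# starts and hands the secret over through a channel other local processes
# cannot read (e.g. an inherited file descriptor, or a key file with 0600
# permissions).
process_keys = {"billing-worker": os.urandom(32)}   # the launcher's registry

def sign(process_name: str, message: bytes) -> bytes:
    # The sending process authenticates its message with its own secret.
    return hmac.new(process_keys[process_name], message, hashlib.sha256).digest()

def verify(process_name: str, message: bytes, tag: bytes) -> bool:
    # The receiver recomputes the tag; compare_digest avoids timing leaks.
    expected = hmac.new(process_keys[process_name], message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"invoice #42 approved"
tag = sign("billing-worker", msg)
assert verify("billing-worker", msg, tag)
assert not verify("billing-worker", b"tampered", tag)
```

This only attributes a message to whoever holds the key, so the scheme's strength rests entirely on how the per-process secret is provisioned and protected, which is where OS-level mechanisms such as file permissions or the read lock suggested above come in.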
  • asked a question related to Information Privacy
Question
10 answers
Also suggest some research papers on it.
Thank you.
Relevant answer
Answer
Hi Jalpesh
When dealing with Big Data Privacy and Security, you need to consider the fact that you must have security first, in order to ensure that you can then have privacy. If you have data which has privacy, but there is no security, then anyone can help themselves to the data, and it is simply a matter of time before the privacy element is cracked and the data is available to the attacker. Whereas, if the data is secure in the first place, then privacy simply adds another level of comfort to your already secure data.
So the first goal must always be to aim for a high level of security. This is then followed by ensuring a high level of privacy can be achieved. Next, we need to ensure we retain the means to ensure we can certify the provenance of the data. Data is useless if it has been accessed and corrupted, modified or had important elements deleted. Thus, we must achieve three goals in order to have a useful outcome.
I have attached a number of useful papers from my own research collection which cover each of these three areas to get you started. As you can see, there is little work on Big Data Provenance, and the work of Thomas Pasquier is something you should explore further.
If you consider the source of the data: data which comes from corporate sources, where the corporation is ISO 27002 compliant, is likely to be a reasonably trustworthy source. For cloud-sourced data, while some standards are now emerging, there is currently no complete cloud security standard, so this data is likely to be of a lesser standard. Once we move to Internet of Things data, we are moving into 'Wild West' territory. Anybody and their dog can easily hack into IoT systems, meaning the level of trust in this data has to be considerably discounted.
If you want to find a big data area that can provide you with an extreme challenge, IoT is the place to go. Of course, this means that until cloud big data security and privacy are solved, your big data IoT challenge will be an impossible goal to achieve.
Ultimately, the choice is yours, but we need to focus on resolving problems in a logical way, so you may want to address the cloud big data challenge first, before moving on to IoT big data. You could, of course, focus on non-cloud big data, but given the ease with which cloud enables the creation of big data, the cloud route may be the better choice.
I hope this helps.
Regards
Bob
enc
  • asked a question related to Information Privacy
Question
6 answers
Biometric IDs and the Risks Involved
India has generated 111,15,84,242 Aadhaar IDs as of Feb 1, 2017, as per the Authority's (UIDAI) website: https://uidai.gov.in/new/ Each ID is linked to a photograph, ten fingerprints, and two iris scans of the person involved. There are major research questions related to these IDs that computer scientists and others should study.
What are the safeguards necessary for allowing banks, insurance companies, cell phone companies and others to access an individual’s biometrics for identification or other purposes?
I became acutely aware of the risks involved yesterday when a cell phone company tried to persuade my wife to let them access her fingerprint for comparison with the stored fingerprint associated with her Aadhaar number. See "Your fingerprint is not your own! Meaning of privacy in India!" in the article at the link below.
Relevant answer
Answer
Customers have unique personally identifiable information (PII) represented in biometric datasets. PII entails privacy, trust, and security requirements at all times. Cybercriminals seek PII for ransomware and illegal financial gain. One way to secure biometric information is to employ a secure server for effective authentication of transactions. Stolen biometric datasets could be used for impersonation to evade safeguards. Therefore, measures are needed such as using datasets for their intended purposes only, destroying them upon completion, restricting access to trusted personnel in closely supervised settings, extended access control, safeguarding the private key, and strong passwords (e.g., for ePassport contents).
  • asked a question related to Information Privacy
Question
4 answers
Hello folks, is there anyone who has used the Child Exploitation Hash Dataset for cybercrime-related simulation who would be able to share it, or who has a lead as to how I can get it for research purposes only?
Ayodeji
Relevant answer
Answer
Ayodeji,
I am not sure if this will really be much of an assist, but this is probably the best you will be able to do when dealing with anything child-related, as it is very challenging to procure hash sets for such material unless you are law enforcement or LE-sponsored to perform forensics, etc. You may want to consider speaking with someone at your local ICAC or NCMEC.
You can use the NSRL datasets found at the included link for non-child related hash sets for research projects.
You can use Autopsy to perform some of that research.
Good Luck.
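As a starting point for working with known-file hash sets such as the NSRL data mentioned above, here is a minimal Python sketch that hashes files and matches them against a hash set. The set contents in the test are placeholders for what you would load from the dataset:

```python
import hashlib

def sha1_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    # Hash the file in chunks so large evidence files do not fill memory.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest().upper()

def match_against_hashset(paths, known_hashes):
    # known_hashes: a set of uppercase hex digests loaded from, e.g., an
    # NSRL export; returns the paths whose content hash is in the set.
    return [p for p in paths if sha1_of_file(p) in known_hashes]
```

Tools such as Autopsy automate exactly this lookup against NSRL and custom hash sets, so a sketch like this is mainly useful for scripted research pipelines.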
  • asked a question related to Information Privacy
Question
8 answers
Email users want their email messages and email clients to provide a high level of privacy as well as good security.
Relevant answer
Answer
Thanks.
The concepts of privacy and security are vast and can be applied in many fields.
  • asked a question related to Information Privacy
Question
11 answers
Threat classifications and models have been proposed for classifying threats related to information systems. For example,
Islam, T., Manivannan, D., & Zeadally, S. (2016). A Classification and Characterization of Security Threats in Cloud Computing. INTERNATIONAL JOURNAL OF NEXT-GENERATION COMPUTING, 7(1).
Cyber-physical security for smart cars: taxonomy of vulnerabilities, threats, and attacks
Jouini, M., Rabai, L. B. A., & Aissa, A. B. (2014). Classification of security threats in information systems. Procedia Computer Science, 32, 489-496.
Threats Classification: State of the Art
However, none of them talks about the classification of threats related to individuals, such as home internet users or students.
My question is: can the threat classifications proposed for information systems be used as-is for individuals?
I personally believe they should not, as humans and information systems are different.
Relevant answer
Answer
I agree with the explanations provided by Henrique Santos and Jean Damascene Twizeyimana. An information system encompasses its users, so models developed for information system security are applicable to humans also. And yes, if as a human being I do not take responsibility for the safekeeping of my gadgets (e.g., my cards, smartphones, physical keys, laptop, etc.) or do not keep certain things private to myself (e.g., my PINs, passwords, OTPs, etc.), then I doubt anything can provide me security.
  • asked a question related to Information Privacy
Question
3 answers
When working with collaborators in Switzerland it became obvious that the government shot itself in the foot when regulating the use and transfer of patient data. Even some of the simple anonymized statistical data sets require permission from the ethics committee.
These country-specific regulations also serve as a barrier to entry when asking someone for a data set from a published paper in order to replicate their experiments to see if these experiments are reproducible. 
In your experience, what are the best countries (including Asia), where data transactions for research purposes are not regulated and most fluid? 
Relevant answer
Answer
Here is an article discussing the privacy laws between the EU and the US: 
As for the determination of the best country, it will depend on your preset determining factors.
  • asked a question related to Information Privacy
Question
7 answers
Hello,
I am trying to find different case studies that could be used for learning different information security and privacy (IS&P) concepts, issues, and approaches to dealing with IS&P threats. Although I am most interested in case studies focused on the human element in information security, case studies focusing on other aspects of IS&P are also of interest.
In some cases, "scenarios" and "case-based learning" are also used for this kind of teaching or learning.
I myself am searching but if someone already is aware of some resources, it will be really helpful.
Thanks,
Ali
Relevant answer
Answer
The human factor is considered the weakest link in defending systems' security and privacy. The bulk of known attacks lie in the area of social engineering. Phishing attacks are widespread in both online and offline communications (for instance, email and postal services).
The bold/italic terms above can be looked up in Wikipedia for their exact meaning, and used with any search engine to collect an enormous amount of information relating to actual analyses of vulnerabilities, threats, and their impact on privacy and security.
  • asked a question related to Information Privacy
Question
5 answers
What is the difference between p-sensitive k-anonymity and l-diversity?
Relevant answer
Answer
You have other variation such as t-closeness
l-diversity guarantees that each group contains at least l different sensitive attribute values (or combinations), in addition to the basic k-anonymity requirement.
p-sensitivity relates to location queries, but could probably be generalised to a larger set of problems.
What do you want to do?
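A small sketch may help make the two notions concrete: k-anonymity looks at the size of each quasi-identifier equivalence class, while l-diversity looks at the number of distinct sensitive values within each class. The toy table below is invented for illustration:

```python
from collections import defaultdict

def group_by_quasi_identifiers(rows, qi_columns):
    # Partition the rows into equivalence classes that share the same
    # combination of quasi-identifier values.
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[c] for c in qi_columns)].append(row)
    return groups

def k_anonymity(rows, qi_columns):
    # The dataset is k-anonymous for k = the smallest class size.
    groups = group_by_quasi_identifiers(rows, qi_columns)
    return min(len(g) for g in groups.values())

def l_diversity(rows, qi_columns, sensitive):
    # The fewest distinct sensitive values found in any class.
    groups = group_by_quasi_identifiers(rows, qi_columns)
    return min(len({r[sensitive] for r in g}) for g in groups.values())

rows = [
    {"age_band": "30-39", "zip": "902**", "disease": "flu"},
    {"age_band": "30-39", "zip": "902**", "disease": "asthma"},
    {"age_band": "40-49", "zip": "913**", "disease": "flu"},
    {"age_band": "40-49", "zip": "913**", "disease": "flu"},
]
k_anonymity(rows, ["age_band", "zip"])             # 2: each QI combination appears twice
l_diversity(rows, ["age_band", "zip"], "disease")  # 1: the second group has only "flu"
```

Here the table is 2-anonymous but only 1-diverse, since the second group shares a single sensitive value; that homogeneity is exactly the disclosure l-diversity is meant to prevent.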
  • asked a question related to Information Privacy
Question
3 answers
Hi,
I need a privacy meter to secure personal data. For this, I need to find a threshold value so that I can treat all entities below the threshold as safe and all entities above it as unsafe.
Can anyone provide formulas and methods for mitigating privacy risk for a large amount of personal data?
Relevant answer
Answer
Jeff, it was indeed a valuable response and the suggested tool will be great help towards implementation of one of the ideas I have in mind. Thank you so much. Thanks kskyani for asking the question as well :-)
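One simple way to build the kind of privacy meter asked about is to score each record by the rarity of its quasi-identifier combination and compare that score against a threshold. This is only a rough proxy for re-identification risk; the columns, data, and threshold below are assumptions for illustration:

```python
from collections import Counter

def risk_scores(records, qi_columns):
    # A simple re-identification proxy: risk = 1 / (size of the record's
    # quasi-identifier equivalence class). Unique records score 1.0.
    counts = Counter(tuple(r[c] for c in qi_columns) for r in records)
    return [1.0 / counts[tuple(r[c] for c in qi_columns)] for r in records]

def partition_by_threshold(records, qi_columns, threshold=0.2):
    # Entities at or below the threshold are deemed safe; the rest unsafe.
    safe, unsafe = [], []
    for record, risk in zip(records, risk_scores(records, qi_columns)):
        (unsafe if risk > threshold else safe).append(record)
    return safe, unsafe

people = [
    {"age": "30-39", "zip": "902**"},
    {"age": "30-39", "zip": "902**"},
    {"age": "30-39", "zip": "902**"},
    {"age": "30-39", "zip": "902**"},
    {"age": "30-39", "zip": "902**"},
    {"age": "30-39", "zip": "902**"},
    {"age": "40-49", "zip": "913**"},
]
safe, unsafe = partition_by_threshold(people, ["age", "zip"], threshold=0.2)
# the first six share a class of size 6 (risk ~0.17, safe);
# the last record is unique (risk 1.0, unsafe)
```

More principled meters replace this uniqueness score with population-level estimates (e.g., sample-unique versus population-unique analysis), but the threshold-partitioning structure stays the same.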
  • asked a question related to Information Privacy
Question
3 answers
Research question: "How aware are highly educated parents of the private data in their family household?"
- Target group: families with children aged 7-12 and highly educated parents of any age.
- Subject: awareness of shared private data in personal surroundings in and around the home.
Relevant answer
Answer
They have been working with families with children, researching their media usage and behavior.
  • asked a question related to Information Privacy
Question
8 answers
I'm looking for work on children's views of privacy. At what age do they become aware of the importance of privacy regarding their personal information? Are there recommendations from psychologists on the need to protect children's privacy?
Relevant answer
Answer
I have quickly scanned this recent study about privacy perception among teenagers. 
  • asked a question related to Information Privacy
Question
4 answers
Has any research been done on whether increasing global information privacy concerns are impacting telemetry sharing among organizations, and thereby impacting overall threat intelligence?
Relevant answer
Answer
@shuaibu , any references ? 
  • asked a question related to Information Privacy
Question
5 answers
Data blocking is becoming recognised as a problem in data analytics in healthcare and has been known to be a problem in health research for a long time. Although data breaches are not uncommon, has anyone measured the resultant harm to patients from these eg how many have actually suffered identity theft or embarrassment from 
Relevant answer
Answer
Thanks Tobias and Gottfried. I do mean concrete harms such as medical identity theft, work or insurance prejudices, significant embarrassment or personal disruption.
By data blocking I mean: information blocking, “where persons or entities knowingly and unreasonably interfere with the exchange or use of electronic health information.”
The ONC report below mostly refers to vendor and financial reasons for promulgating data silos, but I am also interested in other, more local reasons, such as control and rivalry and the misapplication or over-application of privacy-protecting legislation, particularly at the health service level.
Yes, paper records are also susceptible to breach and blocking
  • asked a question related to Information Privacy
Question
1 answer
From your findings in this work, do you think that the security policy should be tailor-made based on the HCO? I mean that rather than enforcing the policy by considering the role of the employers, we include rules that they can learn.