Science topic
Data Protection - Science topic
Explore the latest questions and answers in Data Protection, and find Data Protection experts.
Questions related to Data Protection
Courts are notoriously overloaded. Added to this are long breaks during ongoing proceedings, which require repeated re-reading of the case. Furthermore, while judges are formally independent and external interference is hardly verifiable (although in the case of the Federal Constitutional Court, many now doubt this), even Don Corleone's blackmail was not verifiable – he only made people offers they couldn't refuse. Thus, a number of factors can lead a court to take a shortcut to reaching a verdict.
The problem here is the sheer power with which judges are ultimately endowed. While there are legal remedies for appeal and revision, this does not necessarily mean that the next instance will deal with the case more responsibly, nor can all defendants afford to take legal action. How can we better control the arbitrariness that is often suspected, especially in political matters (as suggested in this case by an expert opinion-like article: https://www.achgut.com/artikel/Gefaehrden_Karikaturen_den_oeffentlichen_frieden)?
AI may offer a way forward. The article in question sets out the aspects that the courts should have considered. Since the article was written by a judge, it can be assumed that he described the framework correctly. If an AI were trained to evaluate cases and compare them with the judgments, it would likely reveal that certain aspects were not taken into account in the judgment. The AI is not intended to reach a judgment itself, but rather to analyze the completeness of the evidence.
If the AI evaluation is made public, the court would at least be required to explain why it reached certain conclusions that only partially harmonize with the AI's analysis, or it would have to face unpleasant criticism in an appeal/review (the opposite is also true, of course: if the appellate/review court wants to view certain cases differently, it would also have to provide more detailed reasons than is currently the case). At the very least, politically motivated judgments would be significantly more difficult to push through, and presumably the burden on appeal and review courts would even be reduced, because a lack of due diligence at the lower level would hardly be an effective justification.
How would such an AI be trained? Of course, such training should not end up in the hands of anyone with any ties to politics, as this would open the door to bias. Two possibilities are available:
(1) Training based on judgments in completed proceedings, taking all instances into account. During testing, the AI should already notice that the above-mentioned judgment differs from other judgments because it does not take certain procedural aspects into account. Since judgments are always public, there would be no data protection issues.
Perhaps somewhat more specific, but also conceivable, would be the inclusion of case files, although this would be assessed more strictly from a data protection perspective. However, this would also reveal sloppy work by the investigating authorities in certain cases.
(2) Training based on "case reports." For this purpose, universities can prepare exam papers that analyze cases according to all legal rules, specifically highlighting procedural omissions, and are evaluated by professors for completeness and accuracy. Such reports could be prepared independently of time constraints and other pressures, would not represent a significant cost factor, and could potentially have a positive impact on future careers as judges or prosecutors.
In principle, such approaches already exist, although they are often met with critical comments that an AI could not ethically compete with or replace humans. However, that is not what it is intended to do, and there are different opinions about the ethical standards of some state lawyers. If such tools exist, we should consider using them sensibly instead of rejecting them outright.
Should Big Data Analytics be used more for personalising services or improving cybersecurity systems?
Currently, it is assumed that Big Data Analytics is a key tool for both personalising services and strengthening cybersecurity. The dilemma is which of these areas to invest more resources in and what the consequences of these decisions may be.
Companies and institutions face the challenge of choosing a strategy for using big data analysis. Personalisation allows for the creation of more attractive products and services, which leads to an increase in sales and customer satisfaction. On the other hand, investments in cybersecurity are crucial in the face of the growing number of cyberattacks and threats to users' privacy. The challenge is to find a balance between the benefits of better personalisation and the need to ensure data protection. In a world of growing digital threats, organisations must decide whether to invest more in protection against cyberattacks or rather in the development of tools to better tailor products to customer expectations.
In view of this, it can be argued that personalising services through Big Data brings greater business benefits than using it in the area of cybersecurity. On the other hand, it can equally be argued that Big Data should be used primarily to improve cybersecurity, as this is a fundamental prerequisite for the development of the digital economy. The optimal approach therefore requires the simultaneous development of both areas, with the priority depending on the specifics of the industry.
The issue of the role of information, information security, including business information transferred via social media, and the application of Industry 4.0/5.0 technologies to improve systems for the transfer and processing of data and information in social media is described in the following articles:
THE QUESTION OF THE SECURITY OF FACILITATING, COLLECTING AND PROCESSING INFORMATION IN DATA BASES OF SOCIAL NETWORKING
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANISATION
The role of Big Data and Data Science in the context of information security and cybersecurity
Cybersecurity of Business Intelligence Analytics Based on the Processing of Large Sets of Information with the Use of Sentiment Analysis and Big Data
And what is your opinion on this topic?
What is your opinion on this matter?
Please answer,
I invite everyone to the discussion,
Thank you very much,
Best wishes,
I invite you to scientific cooperation,
Dariusz Prokopowicz

Hi,
I have worked with healthcare customers for several years and have experience with the typical data protection and privacy challenges.
By combining IAM with AI/ML models, we can create a robust health security system. Please check this research and let me know your thoughts...
I want to integrate ChatGPT into my website, but I wonder about the risks. I know the benefits of using it to help with some of my processes. Are there any recommendations for the right setup? What would you say are the key priorities for data protection?
All ideas are welcome!
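For context, one commonly recommended priority for this kind of setup is to keep the API key server-side and to strip personally identifiable information from user input before it leaves your infrastructure. Below is a minimal sketch of such pre-submission redaction; the regex patterns are illustrative assumptions only, and a production system would typically use a dedicated PII-detection library instead:

```python
import re

# Illustrative patterns only -- real deployments need broader coverage
# (names, addresses, national IDs) via a dedicated PII-detection tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a typed placeholder, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# The redacted string is what would be forwarded to the external chat API.
print(redact("Contact me at jane.doe@example.com or +44 20 7946 0958"))
```

For example, `redact("Contact me at jane.doe@example.com")` yields `Contact me at [EMAIL]`, so the address itself never reaches the third-party service.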
I’m exploring AI-powered tools to analyze both qualitative (text, interviews) and quantitative (statistical, numerical) data. I’m particularly looking for tools that are user-friendly, efficient, and safe for academic research, ensuring data privacy and security. Recommendations for platforms or software that prioritize data protection would be greatly appreciated.
Hi everyone,
For a qualitative research study, we are scoping for potential transcription software to transcribe our interviews. We have the following criteria in mind:
- We are looking for affordable software (maximum cost of $200 for 16 hours/960 minutes of transcripts).
- Software should be good at transcribing East-African accents in English (specifically, Ugandan accents).
- Software should have high data protection mechanisms in place. At minimum, it should be compliant with GDPR legislation.
I already came across Otter.ai, Trint, Sonix.ai, and Rev.com. I am wondering if you have used any of these tools before and can provide feedback. Other suggestions for software that meets the aforementioned criteria are also welcome.
Thank you in advance for your responses!
I am writing to request assistance in obtaining the contact information of a researcher whose profile is listed on ResearchGate. The researcher in question is Abhinav Gupta.
As part of my ongoing research project on [Review on Design and Analytical Model of Thermoelectric Generator], I am very keen to discuss potential collaboration opportunities and exchange insights with [Abhinav Gupta]. Having access to their contact details would greatly facilitate this process.
I understand the importance of privacy and data protection and assure you that any contact information provided will be used solely for the purpose of academic collaboration and will not be shared with third parties.
Thank you very much for your assistance. I look forward to your positive response.
I am conducting research as part of my academic work, focusing on legal awareness in cyberspace.
Throughout this process, I have appreciated the importance of diverse perspectives that enrich the dialogue on these issues. Your expertise and insights would be invaluable in understanding the complexities of this issue from different perspectives.
I’m sharing the link to the survey. I appreciate your consideration and participation in this project. I am available to discuss any questions or provide more information about the research.
What challenges affect adherence to data protection policies?
What measures should be implemented when utilizing artificial intelligence within institutional educational frameworks to ensure the protection of minors' personal data? Moreover, how should these measures be aligned with broader regulatory standards such as the European General Data Protection Regulation (GDPR)?
Hi Fellow Researchers,
I, Vinden Wylde, would greatly appreciate your participation and/or feedback in an anonymous Data Protection questionnaire. The insights of like-minded individuals in particular are of crucial value in validating my data protection framework, "(VDaaS): A Novel SLEPT Data Protection Framework", and ultimately for the successful completion of my PhD project/thesis.
Questionnaire Link: https://cardiffmet.eu.qualtrics.com/jfe/form/SV_bOCp24vP4mTR8aO
The questionnaire should take approximately 20 minutes, and I understand the value of your time in contributing to this important project.
Benefits:
- Gain awareness of current data protection practices and upcoming legislation.
- Provide framework guidance in the Technical Application of Data Protection by Default and Design (GDPR: Article 25).
- Enable greater customisable control over data transmission and collection.
- Foster a better overall data protection culture.
Thank you very much in advance for your valuable contributions.
N.B. If there are any other avenues that you deem appropriate for this questionnaire, then please let me know.
Vinden Wylde.
Context/Rationale behind questionnaire: Why is this important?
Regardless of age group, social status, purchasing behaviour, transaction methods (e.g., post-pandemic cashless society trends), intrinsic data protection levels, or external frameworks like the General Data Protection Regulation (GDPR), Digital Footprints (DF) are traceable and globally shared. This often leads to unsolicited approaches, potentially exposing personal information without consent to unwarranted users and adversaries.
For instance, our active DF—manifested through social media, emails, blogs, 'likes,' and published photos—reveal our activities. Conversely, passive DF consists of unintentionally left data recorded by cookies during website visits, including search history and viewed content. This information is frequently leveraged to enhance products and services, serving as a tool to track and analyse the behaviours and demands of online users.
Given the limited control individuals often have over their DF, these digital identities can be exploited for revenue generation without their knowledge. This exploitation encompasses targeted advertising, the use of cookies, and implications for browsing and shopping industries. Moreover, DF visibility by potential employers, schools, creditors, etc., can have enduring effects on reputations, relationships, and employment opportunities.
Artificial-intelligence-augmented governance of healthcare raises ethical issues that need attention to ensure data protection.
Hi,
I'm a master's degree student interested in Access Control Management and Data Protection. Any good topic suggestions and materials will be appreciated.
Here are some key points for a discussion forum
1. Data Protection and Privacy
2. Preventing Unauthorized Access
3. Protection Against Cyber Attacks
The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, fall under the Information Technology Act, 2000, and govern the collection, use, storage, and sharing of sensitive personal data or information (SPDI) in India. These rules protect individuals' privacy and regulate how organizations handle sensitive personal information.
According to the rules, any unauthorized sharing, disclosure, or misuse of sensitive personal data or information can have legal consequences. The punishment for sharing clinical data, private data, or personal details in violation of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, can include:
- Civil Liabilities: Individuals or organizations found guilty of violating the rules may be subject to civil liabilities, including payment of damages or compensation to the affected parties.
- Criminal Liabilities: In serious breaches, criminal liabilities may be imposed on those responsible. This can include imprisonment or fines, depending on the severity of the offence.
It's important to note that penalties and consequences may vary depending on the nature and extent of the data breach and any applicable laws or regulations related to data protection and privacy.
To ensure compliance with the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, organizations and individuals handling sensitive personal data or clinical information should implement adequate security measures, obtain explicit consent from data subjects for data sharing, and adhere to the principles of data protection and confidentiality.
How do I report the authors of a journal paper with whom I have been communicating for over 3 months to be granted access to the dataset used in their published paper? I sent them my CV and proposal report, filled in several forms, and signed agreements with them, only to get this message at the end:
Dear XXXXX,
Thank you for your interest in using data from our institution.
Unfortunately due to data protection issues the data cannot be shared.
We apologize for this inconvenience and wish you all the best with your research/publication.
It was stated in their paper that the corresponding author should be contacted for dataset availability.
Is there any platform that can be secured and compliant with relevant privacy laws and regulations, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA)?
The impact of digital health technologies on the healthcare system is becoming more relevant day by day, and their adoption is making patient care more easily accessible.
The introduction of deep learning into the security of EHR data is one of the areas being looked into to ensure better protection of this sensitive, personally identifiable data.
Your input will help a lot to ensure great achievement of this project.
As an important factor of production, data is actively protected by criminal law in most countries, such as Germany and the United States.
But where is the boundary of criminal law protection?
How can protection be achieved for the entire data life cycle (fetch, process, expose, store, and destroy)?
What are the differences between criminal law obligations undertaken by different subjects (platforms, countries and individuals)?
Please share your answers.
I obtained a dataset from an official body. The data is tabulated, so the cells contain actual case numbers. When the numbers are lower than three but higher than zero, they are intentionally removed as a result of data protection. I wanted to run an imputation model in SPSS to impute those numbers. However, the model imputes values lower than zero (even minus 56). When I set constraints such as one for the lowest value and two for the highest value, the process ends with empty cells. Does anyone recommend anything to overcome this obstacle?
Kind regards.
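Since the suppressed cells are known by construction to hold only the values 1 or 2, one workaround (sketched here outside SPSS) is to impute by drawing directly from that admissible set rather than fitting an unbounded model. In this minimal sketch the 60/40 weighting of 1 versus 2 is an assumption; it could instead be estimated from the shape of the observed count distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

def impute_suppressed(counts, p_one=0.6):
    """Fill NaN cells (counts suppressed for disclosure control) by drawing
    from the only admissible values {1, 2}. Published cells (0 or >=3)
    are left untouched, and no impossible value (e.g. -56) can appear."""
    counts = np.asarray(counts, dtype=float)
    missing = np.isnan(counts)
    counts[missing] = rng.choice([1, 2], size=missing.sum(),
                                 p=[p_one, 1 - p_one])
    return counts

# NaN marks the suppressed cells in the official table.
observed = [0, np.nan, 5, np.nan, 12, 3, np.nan]
completed = impute_suppressed(observed)
print(completed)
```

Every imputed cell is 1 or 2 by construction, which sidesteps the negative or empty results reported above; for inference one would normally repeat the draw several times in a multiple-imputation fashion.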
My name is Laura Lomax and I am an undergraduate student at the University of Bolton completing my 3rd-year project, under the supervision of Professor Jerome Carson. My study is examining whether adverse childhood experiences (ACES), affect an individual’s level of flourishing in adulthood.
Your participation should take approximately 10-15 minutes and will require minimal demographic information such as age, gender, and country. There are two questionnaires to complete, with eight short questions at the end. The first questionnaire consists of 10 questions regarding Adverse Childhood Experiences and the second questionnaire has a total of 23 questions that all relate to one’s flourishing.
Participation in the study is voluntary and you are free to withdraw from the study at any time before the last question. Completing the survey is giving your consent to participate. Once you have completed the questionnaire and pressed ‘submit’, you will no longer be able to withdraw from the study. Please note that all responses to this study are completely anonymous and your identity will remain unknown throughout. Once submitted, all data will be stored in line with the General Data Protection Regulation. The only people with access to the results are the researcher and her supervisor.
This study has been approved by the Psychology Department's ethical committee, which adheres to the British Psychological Society's guidelines. It is not my intention to cause you any psychological distress; however, some questions are of a sensitive nature that you may find distressing. If this is the case, or if you are interested in receiving any support about any issues raised in this study, please contact the following helplines:
Samaritans (UK & Ireland) – Call 116 123
CALM – 0800 58 58 58
Thank you for taking the time to complete the study. Should you feel you require any additional information concerning this study, either before completing it or afterwards, please do not hesitate to contact me via email at ll6eps@bolton.ac.uk. My supervisor's details are J.Carson@bolton.ac.uk.
Warm Regards
Laura
Link to study: https://forms.gle/bTznoPJvyxj6GyMXA
The EU General Data Protection Regulation (GDPR) is among the world's toughest data protection laws. Under the GDPR, the EU's data protection authorities can impose fines of up to €20 million (roughly $23.7 million), or 4 percent of worldwide turnover for the preceding financial year—whichever is higher.
Since the GDPR took effect in May 2018, we’ve seen over 800 fines issued across the European Economic Area (EEA) and the U.K. Enforcement started off somewhat slow. But between July 18, 2020, and July 18, 2021, there was a significant increase in the size and quantity of fines, with total penalties surging by around 113.5%. And that was before the record-breaking fine against Amazon—announced by the company in its July 30 earnings report—which dwarfed the cumulative total of all GDPR fines up until that date.
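The fine cap described above (€20 million or 4 percent of worldwide annual turnover, whichever is higher) can be sketched as a one-line calculation; the turnover figures below are purely illustrative:

```python
def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR Article 83(5) fine: EUR 20 million or
    4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company with EUR 100m turnover: 4% is only EUR 4m, so the
# EUR 20m floor applies.
assert gdpr_fine_cap(100_000_000) == 20_000_000
# A company with EUR 1bn turnover: 4% = EUR 40m exceeds the floor.
assert gdpr_fine_cap(1_000_000_000) == 40_000_000
```

This is why the largest fines in the list below fall on the companies with the largest worldwide turnover: for them, the 4 percent branch dominates.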
Top 10 fines so far:
- Amazon — €746 million
- Google — €50 million
- H&M — €35 million
- TIM — €27.8 million
- British Airways — €22 million
- Marriott — €20.4 million
- Wind — €17 million
- Vodafone Italia — €12.3 million
- Notebooksbilliger.de — €10.4 million
- Eni — €8.5 million
More details: https://www.tessian.com/blog/biggest-gdpr-fines-2020/
Hello. I am conducting a psychological research study with participants in the UK, and I want to know what privacy and data protection rules there are about recording participants' microphone audio and camera video for research purposes. We have a video-conferencing simulation with a virtual human (recorded actor) and would like to record short clips of participants' audio and video during the video interaction. Is this possible, and, if so, what types of informed consent and data protection need to be provided?
Thank you so much.
Given privacy and data protection concerns, is there a good place to get anonymous runners' training data from their previous marathons for academic research purposes? What are the proper ways to ask companies such as Garmin, Strava, etc. for a dataset for research purposes?
Data Protection and Privacy- Comparative perspective
Even though the idea of a "pay with money, not with data" principle is not new in the literature, so far this idea has not translated into domestic legislation. Why? Are there hurdles or obstacles to it? If so, of what nature (legal, political, financial)? If such a principle were to be implemented, what should be the features of a corresponding legal provision under domestic law? Regarding the international regulatory framework of data protection, you are welcome to look at my latest research on the matter at :
The compatibility of DLT-based applications with the GDPR has been reviewed in the past years, but the conclusions were in general not very sharp. Often, scholars underscored the fact that compatibility or lack thereof can only be assessed on a case-by-case basis. This is at least the conclusion I drew in my article on the matter, available at :
Yet I wonder if, with the recent developments in technology and applications, and with the better understanding of how the GDPR is implemented, the time has come for a renewed assessment of the relationship between the two. Are there ways to make DLT applications a priori GDPR-compatible? If so, how? Or, on the contrary, do DLT applications a priori fail to meet the GDPR requirements? If so, why, and what should be fixed when it comes to concrete use cases?
Many thanks for a lively discussion.
Christian Pauletto
Hello everyone, right now I am working on the very initial phase of a dissertation proposal revolving around Articles 16 and 17 of the EU GDPR, and the compliance of a blockchain model with them.
Over 10000 people from 84 countries have already participated. Survey available in 24+ languages. In order to join, please visit: https://www.yashchawla.in/corona-virus
Description: Researchers, policymakers and societies alike are increasingly discussing the New Corona Virus (COVID-19) situation, globally and in their respective countries of residence. The European Commission, national governments and private foundations invest large amounts of money for research, focusing on finding the potential cure for the virus.
In line with these developments, our international research consortium is conducting a research survey to better understand public awareness, opinions of COVID-19, and the role of various communication channels in the propagation of myths and facts. We invite you to participate in this anonymous survey, as learning about your opinions will help us to provide recommendations to both, research institutions and policymakers, regarding the process of effective communication with society. The results of this survey will be published in international academic outlets, with no identifiers to individual respondents. The data collection procedures are in-line with the General Data Protection Regulations (GDPR).
A need for Data Protection Officers is emerging very fast. After the adoption of the GDPR, organizations worldwide need hundreds of thousands of DPOs. Are universities ready? Are there enough data privacy programs/courses that put together information security and law?
Do you think a Data Protection Officer should be a lawyer or an infosec expert? Since it is very hard to find both in one person, do you think the DPO role should be filled by a team of at least two people?
GDPR says:
The data protection officer shall be designated on the basis of professional qualities and, in particular, expert knowledge of data protection law and practices and the ability to fulfil the tasks referred to in Article 39.
Article 39
Tasks of the data protection officer
1. The data protection officer shall have at least the following tasks:
(a) to inform and advise the controller or the processor and the employees who carry out processing of their obligations pursuant to this Regulation and to other Union or Member State data protection provisions;
(b) to monitor compliance with this Regulation, with other Union or Member State data protection provisions and with the policies of the controller or processor in relation to the protection of personal data, including the assignment of responsibilities, awareness-raising and training of staff involved in processing operations, and the related audits;
(c) to provide advice where requested as regards the data protection impact assessment and monitor its performance pursuant to Article 35;
(d) to cooperate with the supervisory authority;
(e) to act as the contact point for the supervisory authority on issues relating to processing, including the prior consultation referred to in Article 36, and to consult, where appropriate, with regard to any other matter.
2. The data protection officer shall in the performance of his or her tasks have due regard to the risk associated with processing operations, taking into account the nature, scope, context and purposes of processing.
I am looking for case studies of actual privacy risks. At the core of privacy and data protection impact assessments, we find the concept of 'risk' meaning - in this case - the probability of a threat to personal data and the possible harm or damage caused by this threat. E.g. I fall victim to a phishing attack and the attacker gains access to my bank account, the actual harm being that my account is emptied. Another example would be that my account at a social media platform is hacked and my identity is used to "go shopping".
Now, one finds a lot of literature on privacy (PIA) and data protection impact assessments (e.g. the edited volume by Wright and De Hert (2012) on PIA), on the potential risks of low levels of data security (e.g. Rosner, Kenneally (2018): Clearly Opaque: Privacy Risks of the Internet of Things), on technological and organizational standards (e.g. ISO 27001 on information security management), and on the regulatory frameworks of privacy and data protection (e.g. everything on the details of the GDPR in the EU). But I have a hard time finding research results evaluating actual risks, analogous to your risk of falling victim to a traffic accident, having your home broken into, or getting cancer.
I would welcome any hint to empirical publications on actual privacy risk analysis be it from medical, social, internet-based or any other research that you consider as most important. I am *not* looking for literature on how to conduct privacy and data protection impact assessments or standards for this purpose. Thank you.
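The risk notion described above (probability of a threat times the harm it causes) corresponds to the classic quantitative formulation of expected loss (ALE = ARO × SLE) from information security risk analysis. A minimal sketch, with scenario names and figures invented purely for illustration:

```python
def expected_annual_loss(annual_rate: float, loss_per_event: float) -> float:
    """Annualized Loss Expectancy: how often the threat occurs per year
    (ARO) times the monetary harm of a single occurrence (SLE)."""
    return annual_rate * loss_per_event

# Hypothetical scenarios matching the examples in the question:
# (annual probability of occurrence, loss per event in EUR)
scenarios = {
    "phishing -> emptied bank account": (0.05, 2_000.0),
    "social media account takeover": (0.02, 500.0),
}

for name, (rate, loss) in scenarios.items():
    print(f"{name}: expected loss EUR {expected_annual_loss(rate, loss):.2f}")
```

Empirical privacy risk research of the kind sought here would essentially supply defensible values for the rate and loss parameters, which is exactly what is missing from the impact-assessment literature cited above.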
Has the GDPR effectuated the application and enforcement of data protection controls in non-European countries? How has it impacted your own country's legal system (if applicable)?
Hi,
I am participating in a scientific study. Our general aim is to collect financial information about the amount of funds that natural protected areas are using (and a breakdown of the spending of those funds), and the estimated budget needed for the PA system to achieve its objectives (the ideal budget), at the country level or broken down into individual protected areas.
Apart from that, we have more specific questions, like how much would it cost to create new protected area, and how much is the positive economic value of the protected area, taking into account ecosystem services and visitor-based income.
If you were so kind as to help me, I would be very grateful.
Many thanks and best regards
The General Data Protection Regulation (GDPR) has been in force since May 2018 and thus for almost a year. Do you know of cases in which fines were imposed for violating the requirements of the GDPR? How high were these fines and which companies were affected? Background of the question are the fears of the fines at that time (up to 20 million Euro or in the case of a company up to 4% of the total worldwide annual turnover of the previous business year). To what extent were these fears justified?
The implementation of the GDPR will reduce corporations' access to personal data in several respects. The effects will have both pros and cons for the economy and business. What effects can we expect the GDPR to have?
I was wondering what the feelings were (esp for those in the EU) about the new General Data Protection Regulation (GDPR) that comes online in 4 months...
In my opinion, information contained in posts and comments on social media portals can be used as research material for scientific research if appropriate standards of ethics and personal data protection are maintained.
In view of the above, I am asking you: Can the information contained in posts and comments on social media portals be used for scientific research?
Please reply.
I invite you to the discussion
Thank you very much
Best wishes
Dear Friends and Colleagues of RG
The problems of the analysis of information contained on social media portals for marketing purposes are described in the publication:
I invite you to discussion and cooperation.
Best wishes

The ongoing saga of improper access to personal medical data continues despite the multitude of computing standards for data protection.
Could Blockchain ensure the security of patient data while allowing access to appropriate healthcare staff?
Dear colleagues,
I will be really thankful if you share your experience, research, and pre-prints on the impact of the GDPR on planning and executing surveys. How do you obtain consent for data protection? What is the respondents' reaction? What is the response rate after the GDPR? We expect some decrease, for now more visible in online surveys than in face-to-face ones.
The GDPR seems to be more of a protectionist initiative for large and rich publishers. They say that "the GDPR improves transparency and data privacy rights of individuals", but it seems to be an initiative that restricts science and reduces access to information. Is it? Please share your opinion.
I am currently working on a catalogue of good practices in the sphere of privacy and data protection learning and this information will help me decide if to include your project in the catalogue.
Thank you in advance,
Dilyana
The use of big data to target groups with advertising is a mild form of manipulation, but considering all the things it could be used for, good and bad, how can a set of ethics be maintained until there is some way of policing the internet? There is a very fine line between using big data to protect people and using it to control people.
I am wondering particularly in light of the new general data protection regulation coming into force across Europe shortly. We found data protection presented a significant challenge in our long term cohort follow up study.
Today, in the UK, the Department of Digital, Culture, Media and Sport (DCMS) announced that organisations could face a fine of 4% of global turnover or £17 million for the failure of critical infrastructure, including within energy, water, transport, and health. Overall it is part of the UK's response to an EU directive on Network and Information Systems (NIS), and levies the same levels of fines as the GDPR (which focuses on data protection).
This also comes on the back of recent power-related outages at BA and Capita, which led to serious problems with their systems. A key focus is that organisations will be required to prove that they have a disaster recovery plan in place, and have plans to enact it during serious incidents.
But, will fines actually improve things or will auditors and the legal industry be rubbing their hands with the increasing fees for the work?
When working with collaborators in Switzerland it became obvious that the government shot itself in the foot when regulating the use and transfer of patient data. Even some of the simple anonymized statistical data sets require permission from the ethics committee.
These country-specific regulations also serve as a barrier to entry when asking someone for a data set from a published paper in order to replicate their experiments to see if these experiments are reproducible.
In your experience, what are the best countries (including Asia), where data transactions for research purposes are not regulated and most fluid?
I am doing research on electronic solutions to substitute the Excel/Word checklist to facilitate the Inspectors' tasks, and then a possible second phase to support the centres' preparations. Do members have any suggestions of solutions that you have seen and that could work as an alternative to the current Excel file? One requirement is that the solution is hosted in the EU to meet data protection requirements.
Hello! I am doing a Master of Laws in Criminal Procedure in São Paulo (Brazil), and my dissertation involves data protection with respect to its sensitivity aspect. The main objective of my dissertation is to study the classification of data, mainly the sensitive category, i.e., when this category came up, in which context it came up, how sensitive data can be defined, which parameters could be used to conclude that a given piece of data is sensitive, etc. Does anyone have/know any article/book related to this subject? Thanks
I have downloaded some tools from these links: http://www.acunetix.com/download-8991-2/
but I'm not able to generate reports from these free trial tools.
Hello people,
How can I evaluate the security of IoT systems?
Any suggestions, resources, or comments would help.
Thank you!
I need to use the Weka tool to analyse anonymization algorithms.
I am interested in finding the advantages of each method, because many researchers have used k-anonymity and made many enhancements to it. Others have also worked on l-diversity for protecting sensitive data privacy. If anyone knows other techniques, I would appreciate that.
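As a minimal illustration of the k-anonymity property mentioned above: a table is k-anonymous if every combination of quasi-identifier values appears in at least k rows. A short sketch (with hypothetical column names) that checks this:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times.

    rows: list of dicts; quasi_identifiers: column names treated as
    potentially re-identifying (e.g. ZIP code, age bracket).
    """
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())

# Hypothetical generalized table: ages bucketed, ZIP codes truncated.
table = [
    {"zip": "123**", "age": "20-30", "diagnosis": "flu"},
    {"zip": "123**", "age": "20-30", "diagnosis": "asthma"},
    {"zip": "456**", "age": "40-50", "diagnosis": "flu"},
]
```

In practice tools apply generalization (bucketing, truncation) and suppression until the check passes; l-diversity additionally requires variety in the sensitive column within each group, which this sketch does not test.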
Hello everybody,
Which data anonymization algorithm is most useful for protecting big data from a data analyzer? For example, I want to outsource a database for data mining. What is effective data anonymization for big data to protect personal information?
The future data protection package includes a General Regulation and a Directive on the protection of individuals with regard to the processing of personal data by competent authorities for the purposes of prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and the free movement of such data.
However, the data protection package initially leaves the Prüm regime unaffected, as was pointed out by the European Data Protection Supervisor (Opinion of the European Data Protection Supervisor on the data protection reform package, 7 March 2012, 443, page 68).
Amendment 6 of the EU Parliament (14 March 2014) introduced it (EP legislative resolution of 12 March 2014, COM(2012)0010 – C7-0024/2012 – 2012/0010(COD)). Today (4 December 2014) it is under discussion within the Council (http://eur-lex.europa.eu/procedure/EN/201285).
I would be interested in any comments or articles regarding this question. Thanks!
Hi All. I'm Khairil from Malaysia.
Currently, I've been working on developing a model/framework for Data Leakage Protection in the government sector. Does anyone have references? Thank you so much for your help.
Patient clinical data is private, and acquiring it for research, especially by researchers not affiliated with healthcare organisations, can be close to impossible, even when such data is anonymised.
Can patient data be simulated? Would it then be considered applicable for use in research?
Are there any samples of simulated data which can be tested?
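One common route is to generate fully synthetic records that mimic the *shape* of clinical data without deriving from any real patient. A minimal sketch (all field names and distributions are hypothetical, chosen for illustration only):

```python
import random

def simulate_patients(n, seed=42):
    """Generate n fully synthetic patient records.

    No real data is used; a fixed seed makes the dataset reproducible.
    """
    rng = random.Random(seed)
    diagnoses = ["hypertension", "diabetes", "asthma", "none"]
    return [
        {
            "patient_id": f"SIM-{i:05d}",  # synthetic ID, never a real MRN
            "age": rng.randint(18, 90),
            "sex": rng.choice(["F", "M"]),
            "diagnosis": rng.choice(diagnoses),
            "systolic_bp": round(rng.gauss(125, 15)),
        }
        for i in range(n)
    ]
```

Whether such data is "applicable" depends on the study: it is fine for testing pipelines and software, but statistical conclusions only hold if the simulated distributions are validated against published population parameters.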
Dear colleagues, we are looking for data on informality among firms in Latin America, and comparisons to other emerging regions.
In particular, we would like to know estimates of the share of firms not registered (while they should); or firms paying taxes, etc...
Apparently, some household surveys include the question, but we would favor firm surveys.
Many thanks!
Note: The WB Enterprise Survey only surveys formal firms, and how they compete with informal ones.
Data protection can be studied from two different angles: technical and legal. Can we consider data security a technical issue and data protection a legal issue?
Using file entropy or the chi square test seems to generate too many false positives (i.e., encrypted files are reported as unencrypted).
Perhaps one can use the FRSS score mentioned on page 12 here (https://www.utica.edu/academic/institutes/ecii/publications/articles/A0B3DC9E-F145-4A89-36F7462B629759FE.pdf), but I'm not sure how to apply that patch to SleuthKit.
Any ideas?
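For reference, the entropy test the question refers to is usually Shannon entropy in bits per byte: uniformly random (and hence well-encrypted) data scores close to 8.0, while text and most structured files score much lower. A minimal version, with a threshold that is an illustrative choice rather than a standard:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 .. 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold=7.9):
    """Heuristic only: compressed files (zip, jpeg) also score near 8.0,
    which is one source of the misclassifications mentioned above."""
    return shannon_entropy(data) >= threshold
```

The overlap with compressed formats is exactly why entropy alone misclassifies files; combining it with a chi-square uniformity test or magic-byte checks reduces, but does not eliminate, the errors.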
My main concern is how to identify individuals who are prone to manifesting psychological or social problems even in a well-managed and friendly working environment, without infringing on their personal space and while taking data protection issues into consideration.
Is jurisprudence of privacy law different from jurisprudence of data protection law?
For example, techniques for ensuring the integrity of data on the server side (at the service provider) or during data transfer.
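A standard building block for both cases (at rest and in transit) is a keyed MAC: the sender or storage layer computes a tag over the data, and the receiver recomputes it to detect any modification. A minimal sketch using HMAC-SHA256 (the key here is a placeholder; in practice it is provisioned out of band, not hard-coded):

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # placeholder key for illustration only

def sign(data: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the data before storing/sending it."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time to detect tampering."""
    return hmac.compare_digest(sign(data), tag)
```

Unlike a plain hash, the MAC cannot be recomputed by an attacker who alters the data but lacks the key; for transfer, TLS already provides this per-record, so application-level MACs mainly matter for data at rest or end-to-end guarantees across intermediaries.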
In a clinical database management system, researchers handle enormous numbers of patient records. Those records may contain sensitive information. How can the individual privacy of patients be preserved in clinical data management and biobanks?