Science topic

Database Design - Science topic

Explore the latest questions and answers in Database Design, and find Database Design experts.
Questions related to Database Design
  • asked a question related to Database Design
Question
4 answers
I want to build an integrative database containing many interrelated tables. Can anyone recommend software, a GUI, or a platform that makes it easy to create the database, integrate the tables, and build a user interface on top of it?
Relevant answer
Answer
My favorite is OpenBioMaps :D which was designed to create and manage biodiversity-related databases...
  • asked a question related to Database Design
Question
31 answers
What kind of scientific research dominates the field of Big Data database systems?
Please provide your suggestions for a question, problem, or research thesis on Big Data database systems.
Please reply. I invite you to the discussion.
Dear Colleagues and Friends from RG
Some of the currently developing aspects and determinants of data processing technologies in Big Data database systems are described in the following publications:
I invite you to discussion and cooperation.
Best wishes
Relevant answer
Answer
Informative question
  • asked a question related to Database Design
Question
5 answers
We are looking to implement a web-based lab notebook as well as a tracking system to upload various assay results for several analogs of a parent compound. We will need to keep very close track of lot numbers, dates received, the chemists who synthesized them, etc. Does anyone use a service that would be helpful?
Relevant answer
Answer
Hey
SCINOTE
  • Intuitive and easy to use
  • Inventory management and MS Office integration
  • Automatically generates reports & manuscript drafts
  • Exports all data in a readable format, with an API
  • Free account option
  • asked a question related to Database Design
Question
8 answers
Hello everyone
I am working on the plankton diversity of freshwater. Besides research articles, we can produce a monograph. I recently learned about database management systems (DBMS), and I can build a database from my Excel data and photographs, but I do not know how to set one up or how to put it online so that everybody can access it. If anybody knows about DBMS, please share your knowledge.
Relevant answer
Answer
Since you can already store your data in an Excel file, you are dealing with structured data, which makes your problem even simpler. You may use the following guide:
1. Database: I recommend using MySQL to store your data, and Structured Query Language (SQL) to save and retrieve data from it. MySQL works with Linux and Windows servers. It is also very light and open source.
...and since you want to make your system accessible online, you also need the following to make a complete system:
2. Server-side language: you may choose PHP, Python, Perl, etc. Any of these languages will let your web pages communicate with the server and your database.
3. Client-side: if your target is just a basic view, plain HTML or XHTML is enough to design your page. You may want to add a few things such as JavaScript and CSS for a more advanced user experience (which you may not need at this stage).
To make your application accessible online after development, contact a hosting company to subscribe, and also register your own domain if you need one.
Hope this helps, good luck.
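As a minimal sketch of what the database step looks like in practice: the snippet below uses SQLite purely for illustration (the SQL is nearly identical in MySQL), and the table and column names are invented for this example.

```python
import sqlite3

# In-memory database for illustration; a real deployment would use
# MySQL or a SQLite file on the server. Names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE plankton (
        id INTEGER PRIMARY KEY,
        species TEXT NOT NULL,
        site TEXT,
        abundance INTEGER
    )
""")
conn.execute("INSERT INTO plankton (species, site, abundance) VALUES (?, ?, ?)",
             ("Daphnia pulex", "Lake A", 120))
conn.commit()

rows = conn.execute(
    "SELECT species, abundance FROM plankton WHERE site = 'Lake A'"
).fetchall()
print(rows)  # [('Daphnia pulex', 120)]
```

The server-side language (step 2) would then run queries like this one and render the results as HTML for the browser.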
  • asked a question related to Database Design
Question
9 answers
More than 40 years later, there is still no standard for the ER model. Both researchers and industry say that it is a popular database design tool, and it is still a must-teach topic in almost every basic database course. So there is a need to clarify the motivations for this state of affairs.
However, it is fair to say that it is not yet clear whether an ER model standard as a design language is viable. Therefore, I ask: is it?
Relevant answer
Answer
I agree with you. Probably we need an official/standardized UML Class Diagram notation for ER modelling, but it will take time.
  • asked a question related to Database Design
Question
7 answers
Hi,
I have read that hybrid relational and non-relational databases are a good approach for IoT applications where real-time analytics are needed (correct me if I'm wrong).
For example, MySQL (relational) and MongoDB (non-relational).
What is your opinion on this, and is there a paper or other resource where I can read more about it?
New to databases here, all information is welcome.
Regards,
Relevant answer
Answer
Here is my short answer to your short question: a time series database; if not that, then a NoSQL-type database. Here is the long answer.
It depends. I am sure my peers may have different opinions on this, so I qualify my response as the simple opinions of a DBA who has started to enter the IoT arena from a business point of view rather than a scientific one. I present to you a long rant, but you did ask about databases and I am a DBA.
Closer examination of your requirements will bring you closer to a solution to consider. For example, as a peer previously stated, there are choices of database types: relational, NoSQL, and, I would add, time series. Each type has its own advantages, but no one is better than the others, so your own requirements will lead you.
Generally for IoT, you want something lightweight (not much overhead, does not require significant resources) that can accommodate many small transmissions, over and over again, so the data becomes large by volume. I would start by considering a time series database: https://en.wikipedia.org/wiki/Time_series_database, which is more process-efficient than the traditional relational database. You really do not need a relational database for IoT unless you are going to take advantage of relational-specific functionality, since relational databases require you to send data in a stringent (perfect) format to match the record definition in the database.
NoSQL databases do not require you to stick to a predefined, stringent format to insert data. Just send what you have and in it goes. The difference between a relational and a NoSQL database goes something like this: I send first_name, last_name, address, and zipcode into a NoSQL database. In it goes. The next record I send to the NoSQL database only has first_name and last_name. In it goes too. Try the same thing with a relational database and the second record will be rejected because the data is incomplete. The trade-off is that, in the NoSQL database, the second record will never show up in a query (search) on address.
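That contrast can be made concrete in a few lines. Below, SQLite stands in for the relational side, and a plain list of dicts stands in for a document store (a simplification, not a real NoSQL engine); the person fields match the example above.

```python
import sqlite3

# Relational side: a fixed schema with NOT NULL columns, so the
# incomplete record is rejected.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (first_name TEXT NOT NULL, last_name TEXT NOT NULL, "
             "address TEXT NOT NULL, zipcode TEXT NOT NULL)")
conn.execute("INSERT INTO person VALUES (?, ?, ?, ?)",
             ("Ada", "Lovelace", "1 Main St", "12345"))
rejected = False
try:
    conn.execute("INSERT INTO person (first_name, last_name) VALUES (?, ?)",
                 ("Alan", "Turing"))
except sqlite3.IntegrityError:
    rejected = True
print("relational rejected incomplete record:", rejected)

# Document side (list of dicts): anything goes in, but the partial
# record never matches a query on address.
docs = [
    {"first_name": "Ada", "last_name": "Lovelace",
     "address": "1 Main St", "zipcode": "12345"},
    {"first_name": "Alan", "last_name": "Turing"},  # accepted as-is
]
hits = [d for d in docs if d.get("address") == "1 Main St"]
print("documents matching address query:", len(hits))  # 1
```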
So I close with a few requirement questions, which I hope can bring you closer to what you are seeking:
1. What kind of data are you collecting? Is it vital to your business?
2. How important is each piece of data you collect? Is it monitoring life support, or simply the temperature of a pair of sneakers on a pallet?
3. Which component does the actual data interpretation? That is, does the database have to make sense of the data, or does it simply serve as a collection repository from which an application program extracts the data for processing?
4. What happens if you lose the database? Can you easily recover, or do you have to start over? Terms such as high availability and disaster recovery are common considerations for database processing. A single silo of data is a single point of failure. Whether or not your choice of database offers redundancy should at least be considered.
5. How often do you send the data? Who knows; if the data is collected and bundled before being sent to a database in a burst, then maybe a relational database has value. Then again...
--Best of luck,
John
  • asked a question related to Database Design
Question
1 answer
This is the problem I have to solve. I don't have much knowledge about distributed databases; can anyone help me?
Relevant answer
Answer
Your question was not clear enough for you to get help.
However, I recommend this publication of mine, which you might find useful:
  • asked a question related to Database Design
Question
3 answers
Hello.
I am studying plant breeding. In my lab, the others usually perform research related to molecular biology and genetics. As far as I know, the general research process is: studying a pathway in model plants, choosing several genes, searching public databases for information on those genes in other species, designing primers, performing PCR and cloning, inserting the gene into a vector and transforming the same plant or model plants, confirming the role of the gene in the transformed plant, and uploading the sequence to the database. However, the transformation process takes a long time.
Nowadays, I want to analyze the relationships among orthologous genes from several species. For example, A, B, and C belong to the same tribe, but the orthologous proteins are found only in A and B, so I want to know why there is no orthologous protein in C.
In addition, I wonder whether it is possible to submit an article analyzing relationships using only data from public databases, without my own sequences. I think such studies are common in ecology or evolution, but I cannot find research that fits my case.
Can you recommend some fit articles or journals?
Relevant answer
Answer
Hello Sang, The following tools & analytical methods might be helpful.
Phylogenomics of plant genomes: https://doi.org/10.1186/1471-2164-9-183
Good Luck
  • asked a question related to Database Design
Question
5 answers
I am currently working on a retrospective analysis of a patient database, which includes demographic variables and others regarding healthcare (especially intra-operative and post-operative outcomes). However, it was not properly designed (6 years ago), so the variables are not operationalized and some data are missing. Any recommendations for fixing this old database and dealing with the missing data?
Thank you.
Relevant answer
Answer
Hello,
Are there any proxy measures you could use which might reflect the data for which you are searching?  For example, if you were looking for postoperative nausea and vomiting but some of the data were not coded for this, perhaps the administration of an anti-nausea agent implies the patient had this condition.
Best of luck on your study, the previous two answers are both quite valuable as well
Rich
  • asked a question related to Database Design
Question
2 answers
Hi everyone,
For a global-scale project I'm working on right now, I'm wondering whether I can use Net Primary Productivity data as a proxy for potential post-fire recovery. My basic thinking is that pre-fire NPP conditions influence post-fire recovery, i.e. high NPP levels facilitate it, which would mean that existing natural conditions help vegetation to thrive.
Obviously, many other factors condition post-fire recovery, but I'm looking for references that may validate or invalidate this thought. I was not able to find anything so far. Any insights?
Relevant answer
Answer
An issue in your research design is the selection of NPP as the independent variable and recovery as the dependent variable. How are you measuring NPP (actual, potential), or do you intend to predict NPP? How do you define recovery: pre-fire biomass, NPP, or species (diversity)? Recovery in all three features is by definition fast in pyro-climaxes, on the millennium timescale in the Mediterranean biomes (maquis, garrigue, fynbos, savanna, steppe) and on the century timescale in others (taiga, heath). You may also want to consider your timescale.
An issue to consider in less frequently burning biomes is whether the pre-fire plant growth is N- or P-limited, as N is removed from the system by fire.
  • asked a question related to Database Design
Question
1 answer
Currently we've been photocopying them and folding them in half and paper-clipping them to the copy.  We just started scanning them into a database rather than photocopying, but that still leaves the original chart to take care of.  Do they have a giant disc spindle just for charts?
Relevant answer
Answer
Hi Alec
Many people choose to do this in different ways. You can use a wall mounted spindle rack, but this is problematic, in that it takes up a great deal of wall space, and if you want to retrieve the chart disk at the back of the spindle, you have to take them all off the spindle first. Probably the simplest way is to use chart trays, such as shown on the link I have added. You can then stack trays as needed to allow for bulk storage, and relatively easy chart retrieval. I hope that helps.
Regards
Bob
  • asked a question related to Database Design
Question
6 answers
I need to set up a database with the following requirements:
- easy handling (data entry, read out / statistics, changing the data entry interface and adding new parameters)
- in best case it should be web based --> the db file is on a server in our network and can be accessed (for data entry) by a web browser
What do you recommend? Filemaker or MS Access, or any other solutions?
Relevant answer
Answer
If you have to build everything from the beginning and it is web based: first design the database (an E/R model, for example); then define the tables, primary keys, foreign keys, and references between the tables; create the tables and relations with SQL commands; and finally start entering data and running queries.
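The steps above can be sketched with SQLite (table and column names are invented for illustration):

```python
import sqlite3

# Design -> tables -> primary keys -> foreign keys -> data -> queries.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        dept_id INTEGER NOT NULL REFERENCES department(id)
    )
""")
conn.execute("INSERT INTO department (id, name) VALUES (1, 'Research')")
conn.execute("INSERT INTO employee (name, dept_id) VALUES ('Alice', 1)")

row = conn.execute("""
    SELECT e.name, d.name
    FROM employee e JOIN department d ON e.dept_id = d.id
""").fetchone()
print(row)  # ('Alice', 'Research')
```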
  • asked a question related to Database Design
Question
2 answers
I have been working on an aspect-level sentiment analysis project that takes travel reviews as input and performs sentiment analysis to identify travel aspects (scenery, ambiance, accommodation, etc.) and assign sentiment scores to them.
I wish to build a web app that lets users search for travel attractions using natural language (e.g., "calm and relaxing beaches") and returns a set of travel attractions (e.g., Unawatuna beach, Hikkaduwa beach, etc.) matching the query, displaying sentiment scores for each aspect of the suggested attractions.
My problem is selecting a data persistence approach for the sentiment-analyzed data that will support inference over these user queries. I think an RDBMS will not be much help (I could be wrong). I would really appreciate an explanation of how to choose a suitable data persistence approach for this type of use case. Thanks in advance.
Relevant answer
Answer
An RDBMS will be fine unless you are working with huge data that should be stored in a DFS and processed with Hadoop. For a survey on sentiment analysis, please have a look at http://www.sciencedirect.com/science/article/pii/S2090447914000550
  • asked a question related to Database Design
Question
3 answers
Does anyone know of a person who might be able to assist in developing a database of prisoner re-entry resources, initially for Pennsylvania and potentially expanded nationwide? We are interested in a volunteer to develop such a program at the start; perhaps a retired IT specialist looking for volunteer opportunities.
Relevant answer
Answer
Thank you, Uvernes. I will pass along this link to my neighbor who is working on this problem with the Department of Corrections here. He is Rich Jacobs ( richjacobs12@gmail.com ). Ultimately a nationwide network of links is needed, available to released prisoners, and for companies and agencies committed to successful reentry into society. The initial step would be developing or expanding what is a available locally to a wider region. Much of what is available may be outdated or limited in scope.
  • asked a question related to Database Design
Question
3 answers
I have an original dataset with 114 instances, and I want to know how to split it into two sets, the first for training and the second for testing. In the literature I find 75%/25% or 66%/34%, respectively. Is there a standardized method?
Regards
Relevant answer
Answer
Divide the data into training and testing sets. A general rule is to divide the data 70-30:
train the model on 70% of the data and test it on the remaining 30%.
Try more combinations, like 50-50, 60-40, etc.
Then cross-validation is used to check the robustness of the model; for that, k-fold validation is used. Observe the performance of the model by repeating this 10 times (10-fold validation) and plot the graph.
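The split and k-fold procedure can be sketched in plain Python, with no external libraries (the 70/30 ratio and the 114 instances come from the thread; the seed is arbitrary):

```python
import random

def train_test_split(data, test_fraction=0.3, seed=42):
    """Shuffle and split into train/test sets; 70-30 by default."""
    rng = random.Random(seed)
    items = list(data)
    rng.shuffle(items)
    cut = int(len(items) * (1 - test_fraction))
    return items[:cut], items[cut:]

def kfold_indices(n, k=10):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    idx = list(range(n))
    fold_size, extra = divmod(n, k)
    start = 0
    for i in range(k):
        size = fold_size + (1 if i < extra else 0)
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        start += size
        yield train, test

data = list(range(114))  # 114 instances, as in the question
train, test = train_test_split(data)
print(len(train), len(test))  # 79 35

folds = list(kfold_indices(len(data), k=10))
print(len(folds))  # 10
```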
  • asked a question related to Database Design
Question
7 answers
If possible, please suggest methods to optimize the database connections of a particular website.
Relevant answer
Answer
It depends on the database itself. In software development, particularly web applications, you can't have an optimized database straight away. Of course, normal practices should be applied and the database should be optimized as far as it can be, but in my experience some attributes of various tables end up being manipulated differently from how they were originally designed. It is always a good idea to revisit most of the database, as there is always scope for improvement.
Coming back to your question, more detail is required to give a precise response.
  • asked a question related to Database Design
Question
5 answers
Dears,
I am interested in comparing different phytosociological relevés recorded in different areas.
I would like to know the basic requirements for applying a DCA (detrended correspondence analysis) to a set of data.
I am using the PAST software; below is a simple example data set. Could you please tell me what is missing?
Thank you very much
A B C D
1 2 1 1
1 1 1 1
2 1 1 1
2 1 1 1
3 2 2 1
3 3 3 2
where A, B, C, and D are different plant species in different areas.
With this analysis I wish to group those species whose abundances are similar across the related areas.
Relevant answer
Answer
Dear Researcher
Please check this attachment if it is helpful for you.
Regards
  • asked a question related to Database Design
Question
5 answers
I want to construct an online database of the medicinal plants studied in my country. Does anyone have experience with this? I want researchers to be able to search online and get information on which medicinal plants have been studied, the compounds isolated, and their bioactivity. I have never done this, so I would like advice on the costs involved and on how to start.
Relevant answer
Answer
Hi
I suggest you create an online database for your data using something like Zoho Creator or another free service that allows this. Or, if you want to create your own website, it may be useful to involve a computer engineering/computer science student, so you have someone with the knowledge to build a website and put your database on it; it will also be customized, because you can discuss the way you want it.
  • asked a question related to Database Design
Question
3 answers
For example, the following nested SQL query cannot be unnested:
SELECT AVG(salary)
FROM
( SELECT DISTINCT department, salary
FROM emp
)
Relevant answer
Answer
You can use a GROUP BY clause. For example, the query above can be rewritten as:
SELECT department, AVG(salary)
FROM emp
GROUP BY department;
Note, however, that this returns one average per department, not a single overall average over the distinct (department, salary) pairs, so check that it matches your intent.
Try it.
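The difference between the nested form and the GROUP BY form is easy to check; here is a small SQLite session with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (department TEXT, salary REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("A", 100), ("A", 100), ("A", 200), ("B", 300)])

# Nested form: one overall average over DISTINCT (department, salary) pairs.
nested = conn.execute(
    "SELECT AVG(salary) FROM (SELECT DISTINCT department, salary FROM emp)"
).fetchone()[0]
print(nested)  # 200.0 -> (100 + 200 + 300) / 3

# GROUP BY form: one average per department, a different result shape.
grouped = conn.execute(
    "SELECT department, AVG(salary) FROM emp "
    "GROUP BY department ORDER BY department"
).fetchall()
print(grouped)
```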
  • asked a question related to Database Design
Question
5 answers
I have mapped XACML to a relational database. Hence, I need to know the future work in XML Security.
Relevant answer
Answer
Thanks, Seetharamulu Banoth, Sir. I will look into ABE (attribute-based encryption) for security.
  • asked a question related to Database Design
Question
9 answers
Hi!
I have a question about selecting the type of database for a system that may have to handle Big Data in the future.
Here are my questions:
  1. Which is better: SQL databases or NoSQL databases?
  2. If SQL is better, can an effectively unbounded table structure (relational design) cause problems such as poor system resource usage? In other words, is one table better, or a design with many tables? For example, consider Facebook's database for likes: is it better to have a separate table per post to store its likes, or to store all likes in a single table?
Thanks.
Relevant answer
Answer
Even though structured databases have become big enough to qualify as big data, some Big Data practitioners hold that true big data is mostly unstructured; meaning, if the data is structured, it is not truly Big Data (let's set this aspect aside for now!).
The question of SQL or NoSQL always comes up in this discussion, and the answer depends on the kind of data being handled. If you have constantly changing data and the database requires quick scalability, NoSQL is ideal.
The big data sits in unstructured NoSQL storage; the data warehouse queries this database and creates structured data for storage in a static place. This serves as our point of analysis. The queries can be run as often as necessary.
The data required for analysis can be stored in SQL. So NoSQL and SQL can, and should, be used in tandem for best results: NoSQL's benefit is quick scalability, while SQL is structured and standardized and can also be scaled (maybe not as fast as NoSQL, but it depends). Currently most industry experts prefer to work with both as the need requires.
Regards,
Dr. Akilesh. R
India
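On question 2 above (a table per post vs. one table for all likes): the standard relational answer is a single likes table with a composite primary key, since the index makes per-post lookups cheap and avoids creating schema objects at runtime. A minimal SQLite sketch with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE likes (
        post_id INTEGER NOT NULL,
        user_id INTEGER NOT NULL,
        PRIMARY KEY (post_id, user_id)  -- one like per user per post
    )
""")
# The index implied by the primary key makes per-post lookups cheap,
# so there is no need for one table per post.
conn.executemany("INSERT INTO likes VALUES (?, ?)",
                 [(1, 10), (1, 11), (2, 10)])
count = conn.execute(
    "SELECT COUNT(*) FROM likes WHERE post_id = 1"
).fetchone()[0]
print(count)  # 2
```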
  • asked a question related to Database Design
Question
9 answers
What are their differences, and when should we use XML databases instead of relational databases?
Relevant answer
Answer
In addition to the things said above, XML was classically designed to make data portable on the web, whereas relational databases are an infrastructure for storing data in one place. For that purpose, JSON is used more nowadays, because XML is quite a "heavy" format due to its self-descriptiveness. In my opinion, if your data is not strongly hierarchically structured, a relational database is probably the way to go (PostgreSQL, for example, is a very good and free relational system).
  • asked a question related to Database Design
Question
2 answers
My modeled process has binary outcomes and site and region grouping. I am modeling it with a generalized linear mixed model (using GLIMMIX) and a diagonalized covariance structure. Typically I see covariance component based formulas for ICC calculation in the literature. GLIMMIX gives me the covariance of the site indicator and the region indicator variables but not the error variance I need to fill out the formulas. I have seen a suggestion to use 3.29 (pi**2/3) as the error variance when the dispersion is near 1.0, but am not sure this is right.
Relevant answer
Answer
Hi David,
I suggest you check out Larry Madden's home page and maybe shoot him an email (you can tell him I suggested it -  we're old friends). Larry has spent a lot of time developing techniques for handling binomial data with high intra-class correlations.  He has some SAS routines and other software on his web page but he is always interested to hear from people looking at this sort of issue:
He has a ResearchGate profile too:
Yours,
N
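For what it's worth, the pi^2/3 value mentioned in the question is indeed the residual variance of the standard logistic distribution, commonly used as the level-1 error variance when computing the ICC on the latent (logit) scale for a binary-outcome mixed model. A sketch (the covariance components are invented numbers standing in for GLIMMIX output):

```python
import math

def latent_scale_icc(var_site, var_region=0.0):
    """ICC on the latent (logit) scale for a binary-outcome mixed model.

    pi^2/3 (~3.29) is the residual variance of the standard logistic
    distribution, used as the level-1 error variance.
    """
    resid = math.pi ** 2 / 3
    total = var_site + var_region + resid
    return (var_site + var_region) / total

# Invented covariance components standing in for GLIMMIX estimates:
print(round(latent_scale_icc(var_site=0.50, var_region=0.25), 4))  # 0.1856
```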
  • asked a question related to Database Design
Question
5 answers
Microsoft access is a software example for relational databases. I need more examples for relational databases. I need also some more examples for Object oriented databases and XML databases.
Relevant answer
Answer
Caché, ConceptBase, Db4o, GemStone/S, NeoDatis ODB, ObjectDatabase++, ObjectDB, Objectivity/DB, ObjectStore, ODABA, OpenAccess, OpenLink Virtuoso, Perst, Picolisp, siaqodb, Twig, Versant Object Database, WakandaDB, Zope Object Database.
  • asked a question related to Database Design
Question
8 answers
I have a situation in which multiple effect sizes are drawn from individual studies therefore making it necessary to account for the lack of independence of effect sizes. I'm comfortable using Stata but I've never conducted a meta-analysis before. So, a step-by-step guide to the statistical analysis and data structure would be ideal.
Thanks!
Owen
Relevant answer
Answer
Hello Owen, this might be helpful:
The follwoing article form the American Journal of Evaluation is good too:
Multilevel Growth Modeling: An Introductory Approach to Analyzing Longitudinal Data for Evaluators by Kevin A. Gee, 35: 543, Feb 28, 2014,
Good luck !
  • asked a question related to Database Design
Question
8 answers
Can you suggest a tool for analysing the database design of a spatio-temporal application? Are there any metrics to measure the design, specifically for a conceptual model (like UML or extended ER)?
Relevant answer
Answer
I don't know about metrics specific to this issue, but there is a language proposed for spatio-temporal constraints for spatial datacubes proposed in a paper by one of my former PhD students who now works for Apple (and formerly for Nokia) in this topic. The reference is: Salehi, M., Y. Bédard, M. Mostafavi, J. Brodeur, 2011, A Formal Classification of Integrity Constraints in Spatiotemporal Database Applications, Journal of Visual Languages and Computing, Vol. 22, No. 5, pp. 323-339. It is available on my web site if not here on ResearchGate.
  • asked a question related to Database Design
Question
3 answers
see above
Relevant answer
Answer
Whether CouchDB is better than other NoSQL stores is a question of the application and the features needed for a specific scenario.
CouchDB is designed for offline operation; it uses multi-master asynchronous replication. In CouchDB, multiple replicas can have their own copies of the same data, modify them, and then synchronize these changes at a later time. A comparison of different NoSQL implementations can be found at: http://publish.uwo.ca/~kgroling/NoSQL%20and%20NewSQL%20survey.pdf
  • asked a question related to Database Design
Question
3 answers
NuoDB is a new database in the cloud. I am interested if you think that it can change the paradigm of the databases in a future.
Relevant answer
Answer
NuoDB is a NewSQL database management system. It provides the same scalable performance as NoSQL systems for online transaction processing workloads, while still maintaining the ACID (Atomicity, Consistency, Isolation, Durability) guarantees of a traditional database system.
It has a distributed database design, meaning that we can run several hosts for the database, which helps improve performance; if any host fails, the database is still available through the other hosts. It facilitates an elastic scale-out and scale-in mechanism.
  • asked a question related to Database Design
Question
4 answers
I have been working on the development of a database regarding RNA classification. Now it is almost done. I have attached the link of the database. I have tested it on Mozilla and Chrome. Any suggestion regarding making it better would be much appreciated. Thank you all.
Relevant answer
Answer
Hi Debasish, great work.
I am also working on something similar using MySQL; my goal is protein folding prediction. I'm still at the very beginning and still learning biochemistry, but maybe we can share some info about MySQL.
My Skype is thiagobenazzimaia
 
  • asked a question related to Database Design
Question
5 answers
Government agencies have and continue to produce digital databases such as SSURGO & STATSGO (US Soils), Cropland Data Layer, US Land Cover, National Wetland Inventory, and USDA-NASS annual summaries. Are we able to calculate error with any confidence?
Relevant answer
Answer
@Paweł Weichbroth, I agree to your argument and would like to add:
True value of "something" can never be known.
  • asked a question related to Database Design
Question
18 answers
I have read a couple of articles trying to sell the idea that an organization should basically choose between implementing Hadoop (a powerful tool for unstructured and complex datasets) and implementing a data warehouse (a powerful tool for structured datasets). But my question is: can't they actually go together, since Big Data is about both structured and unstructured data?
Relevant answer
Answer
It's very hard to answer this question in general without taking into considerations what your specific needs are. Also, "Data Warehouse" is a pretty general term which basically can mean any kind of technology where you put in your data for later analysis. It can be a classical SQL database, Hadoop (yes, Hadoop can be a Data Warehouse, too), or anything else. Hadoop is a general Map Reduce framework you can also use for a lot of different tasks, including Data Warehousing, but also many other things. You also have to bear in mind that Hadoop itself is a piece of infrastructure which will require a significant amount of coding on your part to do anything useful. You might want to look into projects like Pig or Hive which build on Hadoop and provide a higher level query language to actually do something with your data.
Ultimately you have to ask yourself what existing infrastructure is already in place, how much data you have, what the kind of questions are you want to extract from your data and so on, and then use something which fits your needs.
  • asked a question related to Database Design
Question
2 answers
I have a database that includes 200 websites, and I want to retrieve the domain age of each website from the WHOIS database using MATLAB. I need access to WHOIS data.
Relevant answer
Answer
Hello, you need to put in a little more effort than just using MATLAB. I would suggest using a bash script or Python and an API like this one: https://www.robowhois.com/ (it has 500 credits, i.e. searches, for free) to do your task. You can do it with MATLAB, but you will have to hack things together to get it to talk to the API. I hope this is useful to you!
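As a sketch of the scripting approach, here is a Python function that extracts the creation date from raw WHOIS text and computes the domain age. Note the assumptions: the "Creation Date" label is just the common gTLD format (other registries use different labels), and fetching the raw text itself would go through an API such as the one above or the port-43 WHOIS protocol.

```python
import re
from datetime import datetime, timezone

def domain_age_years(whois_text, now=None):
    """Extract the creation date from raw WHOIS text and return the
    domain age in years, or None if no date is found. The 'Creation
    Date' label is the common gTLD format; other registries differ."""
    m = re.search(r"Creation Date:\s*(\d{4})-(\d{2})-(\d{2})", whois_text)
    if not m:
        return None
    created = datetime(int(m.group(1)), int(m.group(2)), int(m.group(3)),
                       tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (now - created).days / 365.25

# Invented sample WHOIS text; a fixed 'now' keeps the example deterministic.
sample = "Domain Name: EXAMPLE.COM\nCreation Date: 1995-08-14T04:00:00Z\n"
age = domain_age_years(sample, now=datetime(2015, 8, 14, tzinfo=timezone.utc))
print(round(age, 1))  # 20.0
```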
  • asked a question related to Database Design
Question
1 answer
UCI repository has one data source for each. However, for semantic integration different datasets from the same domain are sought.
Relevant answer
Answer
You should contact a medical lab for patient datasets. In such scenarios, research students contact such organizations at the university level.
  • asked a question related to Database Design
Question
1 answer
I'm looking for further information (going beyond information given in e.g. GEO publications) on how databases like NCBI-GEO (http://www.ncbi.nlm.nih.gov/geo/) are set up. I'm curious how such a system can be designed in order to allow growing numbers of experiments, samples and expression data - without the need of redesigning a database layout, when a new type of data is available. Would be glad, if you could share resources, examples, detailed explanations or your own experience - anything welcome ;)
Relevant answer
Answer
NCBI-GEO is built on real experimental data generated to measure gene expression in various tissues. One can find gene expression data at the following sites:
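I don't know GEO's internal schema, but one common way to let a database absorb new kinds of data without redesigning the layout is an entity-attribute-value (EAV) style table alongside fixed metadata tables. A minimal SQLite sketch (all names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
# One generic table holds any attribute of any sample; supporting a new
# data type means inserting new rows, not altering the schema.
conn.execute("""
    CREATE TABLE sample_attribute (
        sample_id INTEGER NOT NULL REFERENCES sample(id),
        attribute TEXT NOT NULL,
        value TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO sample (id, name) VALUES (1, 'GSM-demo')")
conn.executemany("INSERT INTO sample_attribute VALUES (1, ?, ?)",
                 [("organism", "Homo sapiens"),
                  ("platform", "array"),
                  ("read_length", "100")])  # a 'new' attribute, no ALTER TABLE needed
rows = conn.execute(
    "SELECT attribute, value FROM sample_attribute "
    "WHERE sample_id = 1 ORDER BY attribute"
).fetchall()
print(rows)
```

The trade-off is that queries become attribute-name lookups rather than column references, which is why such systems usually keep stable fields in normal tables and reserve the EAV part for the open-ended metadata.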
  • asked a question related to Database Design
Question
2 answers
I made a compiler in C using bison/flex, plus a PostgreSQL database schema, to gather and link genome data extracted from EMBL or GenBank (GBK) files. After that, one can run SQL queries and explore several genomes at once. These tools have been useful to our research group over the last three years, and I thought they could also help other people.
The tools are available via direct download from sourceforge "dot" net or via a subversion checkout. Please let me know if you find a bug. Regarding subversion, I can add new developers to the project in order to improve the parser.
Access the tools via the annexed link.
Relevant answer
Answer
Just to conclude this subject: we ended up creating a web page to handle this. The paper can be found in my publication list under the name "PANNOTATOR: an automated tool for annotation of pan-genomes." One small detail about the input files: the tool no longer expects files with specific suffixes (fasta, embl, gbk). I hope it can be useful.
Best regards,
Anderson Santos
  • asked a question related to Database Design
Question
1 answer
I need to know which OODB approach is best in terms of ease of design and ease of use, so that I can compare them.
Relevant answer
Answer
I think OODBs are not a good option. OODBs boomed during the 1990s and have now reappeared; however, unresolved theoretical problems remain. A good critique of OODBs is presented in C. J. Date's An Introduction to Database Systems, 7th edition.
There are also some classic books on OODBs, such as Object-Oriented Database Systems by Elisa Bertino and Lorenzo Martino.
  • asked a question related to Database Design
Question
7 answers
Is there even a standard notation?
Relevant answer
Answer
There is no standard notation for ER diagrams, as there is for UML; that is probably one of their weaknesses. There is an attempt by the OMG to create a metamodel for the ER model; the specification is found in Volume 2 (Extensions) of the CWM (Common Warehouse Metamodel).
Moreover, it is necessary to distinguish between model, schema and diagram. The model is a set of tools and concepts for representing a part of reality; the schema is an instance of the model; and the diagram is the graphical representation of the schema. It is therefore possible to represent a schema graphically with multiple visual constructions, e.g. a UML class diagram. An illustration of this situation is the variety of graphical notations that exist for drawing diagrams in Chen's notation.
  • asked a question related to Database Design
Question
8 answers
The amount of data processed annually has exceeded the zettabyte boundary - a number that would only have been mentioned in highly theoretical articles two or three decades ago. Such an insurmountable amount of data gave birth to a new term: big data. What do you think is the most important tool that will allow us to handle this explosion of data?
a) Is it the increase in CPU performance, which currently grows significantly more slowly than the data?
b) Is it the new programming languages being introduced, which will make processing this data much easier?
c) Is it some novel data analytics algorithm that will significantly ease the handling of data?
d) Or anything else?
e) Or are we dead in the water, with no hope left?
Relevant answer
Answer
These are practical approaches for dealing with Big Data:
(1) MapReduce (Hadoop),
(2) BSP (Hama),
(3) Storm (real-time processing).
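To make the MapReduce idea in (1) concrete, here is a minimal in-memory sketch of the map/shuffle/reduce phases in Python - a toy word count, not Hadoop itself, and the input documents are made up for illustration:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big ideas", "data beats ideas"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"], counts["data"], counts["ideas"])  # 2 2 2
```

In a real MapReduce framework the same three phases run distributed over many machines; the shuffle is where data moves across the network.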
  • asked a question related to Database Design
Question
14 answers
I created a view by joining 3 tables:
1)
User Table--
CREATE TABLE `rb_user_details` (
`uid` bigint(255) NOT NULL AUTO_INCREMENT,
`uname` varchar(255) NOT NULL,
`password` varchar(255) NOT NULL,
`fname` varchar(255) NOT NULL,
`lname` varchar(255) NOT NULL,
`email1` varchar(255) NOT NULL,
`email2` varchar(255) NOT NULL,
`user_image` varchar(255) NOT NULL,
`address1` text NOT NULL,
`address2` text NOT NULL,
`phone1` varchar(255) NOT NULL,
`phone2` varchar(255) NOT NULL,
`add_time` int(11) NOT NULL,
`status` int(11) NOT NULL,
PRIMARY KEY (`uid`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1;
2)
User Role Table---
DROP TABLE IF EXISTS `rb_user_role`;
CREATE TABLE `rb_user_role` (
`id` bigint(255) NOT NULL AUTO_INCREMENT,
`uid` int(255) NOT NULL,
`role_id` int(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=latin1;
3)
Role Table---
CREATE TABLE `rb_role` (
`role_id` bigint(255) NOT NULL AUTO_INCREMENT,
`role_name` varchar(255) NOT NULL,
PRIMARY KEY (`role_id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;
And I create a view by these 3 tables like..
CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
VIEW `rb_vw_user_details` AS
SELECT `rb_user_details`.`uid`,
`rb_user_details`.`uname`,
`rb_user_details`.`password`,
`rb_user_details`.`fname`,
`rb_user_details`.`lname`,
`rb_user_details`.`email1`,
`rb_user_details`.`email2`,
`rb_user_details`.`user_image`,
`rb_user_details`.`address1`,
`rb_user_details`.`address2`,
`rb_user_details`.`phone1`,
`rb_user_details`.`phone2`,
`rb_user_details`.`add_time`,
`rb_user_details`.`status`,
GROUP_CONCAT(`rb_role`.`role_name` SEPARATOR ',') AS `role_name`,
GROUP_CONCAT(`rb_user_role`.`role_id` SEPARATOR ',') AS `role_id`
FROM `rb_user_details`
JOIN `rb_user_role` ON `rb_user_role`.`uid` = `rb_user_details`.`uid`
JOIN `rb_role` ON `rb_role`.`role_id` = `rb_user_role`.`role_id`
GROUP BY `rb_user_details`.`uid`;
After creating it, the view returns the right result set, but I get the message:
`rb_vw_user_details` does not have any primary key
Why does the view not have a primary key? Please help me solve this.
Relevant answer
Answer
Junaid Akhtar is right - views in any database system have no primary keys. There is no way to create a physical PK on a view the way you do on a table; views rely on the primary keys of their source tables.
Your schema implements a many-to-many relationship, in which the table RB_USER_ROLE connects the two dictionaries RB_USER_DETAILS and RB_ROLE. The message you get is not very important, but to avoid it you can add the PK of the junction table, RB_USER_ROLE.id, to the view definition; it will then serve as part of a logical primary key.
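To illustrate the point, here is a minimal sketch using Python's built-in sqlite3 module (not MySQL - note that SQLite's GROUP_CONCAT takes the separator as a second argument rather than via a SEPARATOR clause), with a simplified version of the schema and hypothetical data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rb_user_details (uid INTEGER PRIMARY KEY, uname TEXT);
CREATE TABLE rb_role (role_id INTEGER PRIMARY KEY, role_name TEXT);
CREATE TABLE rb_user_role (id INTEGER PRIMARY KEY, uid INTEGER, role_id INTEGER);

INSERT INTO rb_user_details VALUES (1, 'alice');
INSERT INTO rb_role VALUES (1, 'admin'), (2, 'editor');
INSERT INTO rb_user_role VALUES (1, 1, 1), (2, 1, 2);

-- The view itself carries no PRIMARY KEY; uid acts only as a *logical*
-- key, inherited from rb_user_details through the GROUP BY.
CREATE VIEW rb_vw_user_details AS
SELECT u.uid, u.uname,
       GROUP_CONCAT(r.role_name, ',') AS role_name
FROM rb_user_details u
JOIN rb_user_role ur ON ur.uid = u.uid
JOIN rb_role r ON r.role_id = ur.role_id
GROUP BY u.uid;
""")
rows = con.execute("SELECT * FROM rb_vw_user_details").fetchall()
print(rows)  # one row per user, roles concatenated, e.g. 'admin,editor'
```

The view aggregates the many-to-many junction rows into one row per uid, which is exactly why the database cannot attach a physical primary key to it.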
  • asked a question related to Database Design
Question
5 answers
For an integrative database, an easy search option is highly desirable. Can anyone suggest any examples?
Relevant answer
Answer
The best-looking interface depends on several factors; in particular you should consider spacing, positioning, size and grouping, all of which have an impact on visual clarity. The user should also be notified about what is happening in the background.
  • asked a question related to Database Design
Question
7 answers
Is there any sophisticated and simple software for creating a biological database?
Relevant answer
Answer
If it is for sequences and related data, the CHADO database schema from the GMOD project provides a good start. It is not really software, just a common standard. It is best suited to data storage for genome browsers or websites, because of existing scripts and other CHADO-aware software. It needs PostgreSQL to run and Perl for data preparation, loading and reporting. This would be one of the more complex solutions, I suppose.
  • asked a question related to Database Design
Question
1 answer
.
Relevant answer
Answer
With hash indexes you can only do the equi-join:
select * from A, B where A.C = B.D
but not
select * from A, B where A.C >= 0.95 * B.D and A.C <= 1.05 * B.D
Hash indexes are also far easier to implement.
Complexity of a B-tree search for one value: O(log(n)).
Complexity of a hash-table search for one value: about O(1).
Regards,
Joachim
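A rough sketch of why this is so, in Python: a hash index maps exact key values to rows, so the equality predicate translates directly into hash lookups, while the range predicate cannot use the hash at all (the tables and join keys below are made up for illustration):

```python
from collections import defaultdict

def hash_join(A, B):
    """Equi-join A.c = B.d via a hash table: roughly O(|A| + |B|)."""
    index = defaultdict(list)
    for a in A:
        index[a["c"]].append(a)            # build phase: hash on the join key
    return [(a, b) for b in B for a in index[b["d"]]]  # probe phase

def range_join(A, B, lo=0.95, hi=1.05):
    """A.c BETWEEN 0.95*B.d AND 1.05*B.d -- a hash on exact values cannot
    answer this, so fall back to a nested loop (a B-tree could do better)."""
    return [(a, b) for a in A for b in B
            if lo * b["d"] <= a["c"] <= hi * b["d"]]

A = [{"c": 100}, {"c": 103}, {"c": 200}]
B = [{"d": 100}]
print(len(hash_join(A, B)))   # 1  (only the exact match 100 = 100)
print(len(range_join(A, B)))  # 2  (100 and 103 both fall within 95..105)
```

The hash lookup `index[b["d"]]` only ever finds rows with an identical key, which is exactly the equi-join restriction described above.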
  • asked a question related to Database Design
Question
3 answers
I am searching for a Multi-attribute Index (like the k-d-tree) which is balanced and robust. It should be able to store values from -infinity to infinity.
Relevant answer
Answer
It depends on the number of dimensions you need to index and the volume of data you need to handle. If the number of dimensions is not too large (<20), an index such as the k-d-tree would be fine; but if the data volume is large, you would need a paginated version of it, such as the Q-tree developed by our research group (gim.unex.es) at the University of Extremadura. If the number of dimensions is larger than that, you could consider access methods that fight the curse of dimensionality, such as the VA-File or Locality-Sensitive Hashing. It also depends on the kind of searches you plan to run against the data: all of the mentioned structures support nearest-neighbour searches.
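As a rough illustration of the k-d-tree case, here is a minimal median-split build and nearest-neighbour search in Python (toy 2-D points, no pagination; a large dataset would need a disk-oriented structure as described above):

```python
def build(points, depth=0):
    """Build a balanced k-d-tree by splitting on the median at each level."""
    if not points:
        return None
    axis = depth % len(points[0])          # cycle through the dimensions
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    """Nearest-neighbour search, pruning subtrees that cannot hold a closer point."""
    if node is None:
        return best
    point, left, right = node
    dist = sum((p - t) ** 2 for p, t in zip(point, target))
    if best is None or dist < best[1]:
        best = (point, dist)
    axis = depth % len(target)
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, depth + 1, best)
    if diff ** 2 < best[1]:                # far side may still hold a closer point
        best = nearest(far, target, depth + 1, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2))[0])  # (8, 1)
```

Building from the median keeps the tree balanced, but note that incremental inserts can unbalance a k-d-tree - one reason structures like the Q-tree exist.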
  • asked a question related to Database Design
Question
7 answers
I want to compare different database solutions with respect to the availability of distributed database features.
Relevant answer
Answer
Can you write up some details as to how, and in which scenarios, you want to use the database solution? Generally there are Oracle 11g and IBM DB2, but these target really big enterprises. An alternative to MySQL is PostgreSQL. There are also quite a few different storage engines in MySQL suited to different needs. One other dimension is databases for mobile devices, where you have SQLite. So it is basically about the usage - I can help you out with that.