History & Evolution of DBMS 

Introduction

Storing and managing data is one of the most fundamental aspects of maintaining records in an organized fashion. Before the advent of Information Technology, people maintained data manually, organizing it with cards, notes and the like. This was feasible for smaller amounts of data, such as indexing a small set of books or records, but as data grows larger a manual process becomes nearly impossible to manage; keeping track of it turns into a daunting task. Another disadvantage of manual record-keeping is longevity: data maintained by hand is prone to damage and decay. Information Technology brought revolutionary changes in managing large sets of data in electronic format. Things not only became easier to handle, but data could be stored far longer (even a lifetime, if kept properly). Indexing became easier, and data can be accessed from any part of the world, provided one has access to the database. The arrival of the RDBMS (Relational Database Management System) defined a new level at which the storage of data is formulated so as to retain it not only for a longer time but also in a systematic fashion. As the years progress, and with them the advancement of technology, database systems have reached new heights where one can maintain gargantuan sets of data without any hassle.
The aim of this paper is thus to provide a clear rationale for database management systems and their critical importance in the maintenance of records.

What is Database Technology?

A database is any kind of storage of data, be it in raw format, in machine code, in fields, in records, in files, in tables or in any other fashion. It is a conglomeration of data with some kind of separation defined by distinguishing marks.
Data are combined so as to impart meaning, through categorization and arrangement. Such an arrangement of data is a database. At this stage the basic agenda is simply to store.
Database management is a little different: the motive shifts from storing to retrieving. When data is arranged with an eye to ease of retrieval, we speak of database management. One stores data carefully precisely so that it can be retrieved easily at any later point in time.
Database management has gone through a great evolution within a very short period. Since the late 1970s, database technology has evolved from the software perspective and from the hardware, or platform, perspective as well.
In the early days of big mainframe machines, data were arranged in flat sequential files, followed by the DBMS we call the hierarchical database. This arrangement mimics the old management style and organization structure: data cells, or elements, are arranged in a top-down structure in which any particular cell has one parent and one or many children. To navigate to any cell, the user has to follow the genesis, or parental logic, of the data (Booch et al., 2007).
The next step was the network-structured database, where any cell can be a child of more than one parent and can have many children, and the leaf-most cells can have reverse connections back to the head-level cells. The programmer can navigate to one cell through multiple data paths. It became faster, and for navigating very large databases (VLDBs) this arrangement offers the fastest retrieval. However, inserting or deleting a single cell becomes a nightmare demanding many lines of code. For real-life, real-time, transaction-heavy operations such as banking, where huge numbers of updates, inserts and deletes are undertaken, this kind of arrangement is a killer in time; the load of so many users hitting the same database becomes prohibitive.
To solve this predicament, database technology took the help of discrete mathematics, more pointedly set theory and set algebra. Data cells are now arranged in simple heaps, with their indices arranged in virtual two-dimensional "tables" of rows and columns. Set-algebra operations are used to seek any cell (as a positional co-ordinate <row id, column id>). It is remarkably simple and elegant. All kinds of search operations over the data cells, across various tables and stores, can be expressed through a language called Structured Query Language (SQL).
Data are arranged in separate tables following a mathematical discipline called normalization, in which a row, or record, is formed from different cells, or fields. These fields must depend solely and exclusively on one field called the key field; there should be no inter-dependency among the non-key fields themselves. Such a key-led (the primary key) structure of repeating records forms one table. Many such tables are drawn up, and they are designed to have relations: the primary key of one table can be a member of another table, where it is known as a foreign key. That is akin to a structure in which the second table is a child table of the first. Through this system any table can be made to relate to any other table on the fly, even though in physical reality the data records are stored in a heap structure. Any record, or indeed any cell, is known by the primary key of its record; such a pairing <key, cell value> has a special name, a duplet. This arrangement, however, is data about the data, the meta-data: it is the structure imposed on the data values. A data value only has significance with respect to the meta-data, which therefore describes the data in its physical value. Very detailed logic has gone into developing the language of navigation.
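As a minimal sketch of this key structure (all table and column names are illustrative, using SQLite through Python's standard library): one parent table, one child table carrying a foreign key, and an SQL query that seeks cells through set operations rather than navigation.

```python
import sqlite3

# In-memory database; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # ask SQLite to enforce foreign keys

# Parent table: the primary key uniquely identifies each record.
conn.execute("""CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL)""")

# Child table: customer_id here is a foreign key into customer.
conn.execute("""CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    amount      REAL NOT NULL)""")

conn.execute("INSERT INTO customer VALUES (1, 'A. Cruz')")
conn.execute("INSERT INTO orders VALUES (10, 1, 250.0)")

# SQL seeks cells by set operations over rows and columns, not navigation.
for row in conn.execute("""SELECT c.name, o.amount
                           FROM customer c JOIN orders o
                             ON c.customer_id = o.customer_id"""):
    print(row)  # ('A. Cruz', 250.0)
```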
The relational database management system has had the longest surviving life in the evolution of database technology. Even today, what is used in every industry is some version of an RDBMS (Date, 2003).
But industry needs went much beyond this. The exigency of handling VLDBs, of extremely fast retrieval, of handling many apparently independent applications, and of connecting data tables on the fly led designers to go further. In came the object-oriented database management system. This arrangement mirrors the organization found in nature: everything in the natural world is an object. Each object has some characteristics and exposes some methods, or protocols, for collaboration; there are methods for friends, for children, for parents. But objects also have internal features they may want to protect, exposing them only to other members within the object or to selected others. Objects can contain internal objects, and objects can pile up to become parts of a super-object. Objects can inherit from other objects, can extend friendship to others, and can have multiple inheritance. Objects have many methods encapsulated, and the same method may mean one thing to one set of friends and something different to another; that is, the same method works differently with different objects outside. This algebra revolutionized database technology, allowing nature to be replicated and mimicked within a virtual store of information, and it triggered a revolution in application processing.
The object-oriented database management system (OODBMS) is very effective at understanding and mimicking complex data structures in applications. But the old problem of handling VLDBs persisted and even worsened. A compromise was struck, and the Object-Relational Database Management System (ORDBMS) was applied. Today practically all DBMS vendors use this structure (Booch et al., 2007).
While this revolution was going on, another exigency arose: that of analysis. A data item can be viewed in many ways, and the same data element has qualifiers. For example, sales amount in dollars and sales volume in quantity can be qualified by shop, product, region, company and so on; the list goes on. A retail chain sells many products from one or many of its outlets, further classified by unit of measure and packet, and sold in combination with other products. These sales are undertaken by sales people, by region, by country, by many other qualifiers. The sales figure can be rolled up a hierarchy or drilled down it. The figure is termed a "fact" and the qualifiers are termed "dimensions"; a fact is only meaningful with respect to a dimension. This style ushered in the new revolution of analytics.
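A minimal sketch of the fact-and-dimension idea, again with illustrative names: the fact is a sales amount qualified by product and region dimensions, and a GROUP BY rolls the fact up one level of the hierarchy.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Fact table: one numeric measure qualified by two dimensions.
conn.execute("CREATE TABLE sales (product TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("soap",  "north", 120.0), ("soap",  "south", 80.0),
    ("bread", "north",  60.0), ("bread", "south", 40.0)])

# Rolling up the product level: the fact is aggregated per region.
for row in conn.execute("""SELECT region, SUM(amount) FROM sales
                           GROUP BY region ORDER BY region"""):
    print(row)  # ('north', 180.0) then ('south', 120.0)
```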
Every business is ultimately measured through some metric. With a metric, an analyst can compare performances on a relative scale. Businesses communicate with each other, and within their organizations, through metrics, and customers also understand businesses through metrics. Metrics are figured out through data analysis: facts are analysed with respect to various dimensions, and aggregate values of facts computed through complex calculations generate the analysis, which is then depicted in charts and graphs. Decision makers can understand these representations and publish them for others. People can also extrapolate the data to figure out the trend and where the results would stand if there were some variations; this is called what-if analysis. Business has become numerically oriented.
The analytics revolution has brought us to advanced mathematical levels, where people are interested in statistical treatment and want to know how far they can trust a set of data in corroborating some hypothesis, some assumption, or some target.
Universally, the requirement of business is to know the functional representation of facts with respect to the most significant dimensions. This is a famous problem of today's business world, and many techniques have been developed for it; the entire knowledge pool of mathematics has come to the aid of data analysis. Today we are coming out of the regime of fixed assumptions. Neural-network techniques with genetic algorithms allow us to find the right equation and the correct result through advanced simulation. Statistical and stochastic databases and genetically programmed data structures are now giving us solutions that could not have been reached manually.

Conclusion

Database technologies have gone beyond their initial agenda of storing and retrieving. We now derive solutions from data that are unstructured. Previously we formed the data structure, that is, the metadata, and tried to fit the data into it; now we mine the relations in the data itself, without any prior structure. We can evaluate a data pool, figure out the inter-relations, figure out its innate functional representation, and extrapolate probable future values with respect to some level of confidence. Database technology has opened up a new world of scientific investigation using business-generated data. The journey has not stopped: newer technology keeps emerging, solving problems human civilization could not previously surmise. The discovery of chaos theory is raising the level of knowledge much higher still.

Data within organization

An organization is characterised by the regular transactions generated every day through its routine operations. These operational data are collected and processed locally, and the final aggregated results flow up the organizational hierarchy. This is the approach of the age-old management information system. Under it, the wider organization cannot access the atomic data elements in any application and cannot re-use them even when that is extremely necessary; the same data, the primary transaction, has to be entered into various applications. The duplication is a big loss of effort, money and accuracy, because one little human error expands to unimaginable proportions through the system.
The solution evolved over the years. Data from different applications were pooled into one data store, called the operational data store, where rationalization across all the applications' data was undertaken and the result stored in a universal format. These data were then fed to the various applications and shared among them on a need-to-know basis, under strict user-access privileges.
This data store then collates all the data into a different type of database, where aggregated data are stored classified by dimensions. Data now reside as facts and dimensions culled from the various applications. Any individual application can download the data it needs and pool the data back after any modification.
The enterprise-level data warehouse is the newest and most trusted structure in the business world. From this huge enterprise-level database come data marts, each with one fact table and many dimension tables, where each numeric fact, or metric, is stored with respect to all possible dimensions. From these data marts we form data cubes, smaller marts with two or a few dimensions, pictorially depicted as three-dimensional cubes. Mathematics of transformation is used to see what happens if a fact turns into a dimension and another dimension (transformed into a cardinal or ordinal number system) becomes a fact. Viewing the data from all possible angles gives analysts a holistic picture, and the analysis is made simpler (Laberge, 2011).
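As a rough sketch of how a small cube could be formed from a data mart (the data and dimension names are invented for illustration), the fact can be aggregated over every subset of the dimensions, giving all the angles of view at once.

```python
from itertools import combinations
from collections import defaultdict

# Data-mart rows (shop, product, amount); all values invented.
rows = [("s1", "soap", 10.0), ("s1", "bread", 5.0),
        ("s2", "soap",  7.0), ("s2", "bread", 3.0)]
dims = ("shop", "product")

# A cube holds one aggregate table per subset of the dimensions.
for k in range(len(dims) + 1):
    for subset in combinations(range(len(dims)), k):
        totals = defaultdict(float)
        for row in rows:
            key = tuple(row[i] for i in subset)  # the group-by key
            totals[key] += row[2]                # sum the fact
        names = tuple(dims[i] for i in subset) or ("grand total",)
        print(names, dict(totals))
```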
At the present stage of development, these EDWs (Enterprise Data Warehouses), data marts and data cubes are fed into analytics software to evaluate the health of operations quantitatively and parametrically.
The spectrum is now complete: atomic-level data from operations are collected and processed all the way to appropriate pictorial depiction, and analysed with appropriate weights. This yields a trend analysis and a future extrapolated value. When the actual value arrives it is tallied against the forecast, and the gap is found and analysed until the optimum result is reached. Optimization is the final stage: we want to know what the solution space is and where we can operate our business without compromising output, quality or profit.
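A minimal sketch of the trend-and-gap step, assuming a simple least-squares line as the trend model and invented figures:

```python
# Fit a least-squares line to past metric values, extrapolate the next
# period, then tally the actual value against it.
xs = [1, 2, 3, 4, 5]                 # periods
ys = [10.0, 12.0, 13.5, 15.1, 17.0]  # observed metric values (invented)

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

forecast = slope * 6 + intercept  # extrapolated value for period 6
actual = 18.4                     # value that later arrives (invented)
gap = actual - forecast           # the gap that is then analysed
print(f"forecast={forecast:.2f}, gap={gap:.2f}")
```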

New Technology aiding Regional Health Authority

Introduction

A Regional Health Authority (RHA) is an organization to which many leaf-level health outlets report; the outlets together serve a total of 1 million people, and they include primary care trusts and General Practitioners (GPs). The RHA does collect data on patients, their history and the treatment meted out to them. However, there are discrepancies. The GPs are individual agents who treat people but in most cases do not collect the data, and what data they do keep is in such free formats that it is not recorded in the RHA database; some data do come in, but even those are not recorded in an organized fashion. Retrieval of those data has therefore not been possible, and they are of no serious use in follow-up. The local trusts and the agents do their work on a familiar, informal basis only, simply keeping everything in their heads. The data has no value for researching ailment patterns and treatment patterns.
Technology is available to solve this kind of problem; it would combine hardware and software applications. I will discuss one solution as a suggestion.

Data Capture

We now have hand-held devices that work with mobile phones. Every GP, whether working independently or with a local trust, should have one mobile device (most likely running the Android operating system) on which a templated application is loaded. There are two possibilities. One is an application installed on the device itself, which has its well-known perils. The newer approach is to access the "cloud": the person clicks a URL that takes them to an application hosted remotely rather than installed on their local machine, keys in a user-id and password for authorized access, and is presented with a form into which the data are keyed. The typical template requires the GP to fill in the patient's name; as soon as the name is keyed in, the patient's id appears along with the latest information collected on previous visits. If the patient is new, the GP keys in the necessary details: name, NHA number, address, age, sex, ailment reported, ailment diagnosed, medicines suggested, and a general note. The id and the date are generated automatically. Once the screen appears with these details, the GP keys in the new comments and saves the form. The application records all the comments serialized in a history file by date.
The GP's name is saved automatically, so if a new GP sees the patient, she can find out who saw the patient before and what all the comments were.
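A minimal sketch of this capture template, assuming SQLite for storage; the schema, field names and sample entries are all illustrative.

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY, name TEXT, nha_no TEXT,
    address TEXT, age INTEGER, sex TEXT)""")
conn.execute("""CREATE TABLE visit (
    visit_id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(patient_id),
    visit_date TEXT, gp_name TEXT, ailment TEXT,
    diagnosis TEXT, medicines TEXT, comment TEXT)""")

def record_visit(name, gp, ailment, diagnosis, medicines, comment):
    """Look the patient up by name; create them if new; append the visit."""
    row = conn.execute("SELECT patient_id FROM patient WHERE name = ?",
                       (name,)).fetchone()
    pid = row[0] if row else conn.execute(
        "INSERT INTO patient (name) VALUES (?)", (name,)).lastrowid
    conn.execute("""INSERT INTO visit (patient_id, visit_date, gp_name,
                        ailment, diagnosis, medicines, comment)
                    VALUES (?, ?, ?, ?, ?, ?, ?)""",
                 (pid, date.today().isoformat(), gp,
                  ailment, diagnosis, medicines, comment))

record_visit("J. Smith", "Dr. Rao", "cough", "bronchitis",
             "amoxicillin", "review in one week")
# The serialized history for the patient, ordered by date:
print(conn.execute("""SELECT visit_date, gp_name, comment FROM visit
                      WHERE patient_id = 1 ORDER BY visit_date""").fetchall())
```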

Next step

This is data captured at the point of occurrence. The application is hosted in the cloud, and the cloud application stores the data of the individual patients. For these data to be usable, they need to be stored with some dimensions. In this particular case there is no numerical fact; rather, the comments are the basic facts, qualified with respect to ailment, day, principal diagnosis and doctor. These four appear as dimensions to the fact, the comment. This kind of arrangement puts the records into a definite shape.
All data coming from the different GPs are categorized automatically and stored in a grand table with the necessary dimensions (in this application the captured fields are the same for everyone, and the dimensions are also the same, which means the meta-data capturing data from all the GPs is identical). The grand table is a data mart holding the comment fact and the various dimensions, and GPs can access it for any retrieval. Because the data are categorized, an analyst at the RHA, or anyone with the right privilege, can see the aggregations. For example: What were the diagnoses for a particular ailment type? How many visits has a particular GP charged per patient on average? How many days is a patient detained on average for a particular type of ailment? What medicines are generally administered to a patient? An analyst can build a general map of how a particular local trust or GP is faring, whether any standard treatment style is emerging, and which doctor deviates from that emerging standard and why. A pattern of ailments with respect to age is also evident from these treatments.
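A minimal sketch of the grand table and two of the aggregations mentioned above, with an invented schema and sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The grand table: the comment fact plus its qualifying dimensions.
conn.execute("""CREATE TABLE visit_fact (
    ailment TEXT, visit_date TEXT, diagnosis TEXT,
    gp_name TEXT, patient_id INTEGER, comment TEXT)""")
conn.executemany("INSERT INTO visit_fact VALUES (?, ?, ?, ?, ?, ?)", [
    ("cough", "2014-05-01", "bronchitis", "Dr. Rao", 1, "rest"),
    ("cough", "2014-05-08", "bronchitis", "Dr. Rao", 1, "improving"),
    ("fever", "2014-05-02", "influenza",  "Dr. Lee", 2, "fluids")])

# Average number of visits a GP has per patient:
print(conn.execute("""SELECT gp_name,
           COUNT(*) * 1.0 / COUNT(DISTINCT patient_id) AS visits_per_patient
       FROM visit_fact GROUP BY gp_name""").fetchall())

# Diagnoses recorded for a particular ailment type:
print(conn.execute("""SELECT DISTINCT diagnosis FROM visit_fact
                      WHERE ailment = 'cough'""").fetchall())
```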
This information about ailment-age mapping, ailment-neighbourhood mapping, ailment-treatment mapping and ailment-medicine mapping is ferreted out of the transaction data. That is what is known as data mining: taking the data and finding relations that could not be known beforehand.

How to get these?

The personal, atomic-level data can then be collated with the existing data the trusts collect, though those applications may differ. RHAs keep inventories of medicines and consumables, so there would be another application recording the usage of consumables with respect to the doctor using them. Doctors' performance in using consumables and medicines needs to be measured; this matters most for surgeons and for emergency cases. How does one doctor's use of medicines compare with another's? If the result is that some doctors use an outlying amount of consumables, then some standard norms can be discussed and, if necessary, imposed.
Another application would be a simple inventory management system for medicines and consumables. Some local trusts have a perennial shortage of consumables, depending on what they need versus what they get. The RHA has started a standardised quota system for the local trusts, but some trusts will need more than the average depending on season, terrain and special local conditions: flood-affected or disaster-prone areas need more consumables and particular types of medicines. One might also find a few trusts where stock is in excess and rotting. From the inventory application an ABC analysis (an analysis of fast-moving, standard-moving and slow-moving materials) can be run; this analysis reveals the economic order quantity to stock against the usage required.
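A rough sketch of both calculations, with invented usage figures; the ABC cut-offs (roughly 70/20/10 by value) and the classic EOQ formula sqrt(2DS/H) are conventional choices, not prescriptions from the source.

```python
import math

# Invented annual usage value per item, for the ABC classification.
usage = {"gauze": 9000.0, "saline": 4500.0, "syringes": 3000.0,
         "gloves": 2000.0, "splints": 400.0, "thermometers": 100.0}

total = sum(usage.values())
cumulative = 0.0
for item, value in sorted(usage.items(), key=lambda kv: -kv[1]):
    cumulative += value
    share = cumulative / total
    # Conventional cut-offs: A covers ~70% of value, B the next ~20%.
    cls = "A" if share <= 0.70 else ("B" if share <= 0.90 else "C")
    print(f"{item:13s} {cls}")

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

print("EOQ for gauze:", round(eoq(9000, 50, 2.5)))  # -> 600
```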

Integration

These applications hold different data. The inventory data must be integrally connected with the sales and distribution data, so that the local trusts and the RHA know which unit does more distribution and what its demand is. Internal transportation can then rationalize the demand provisions among the different units.
The challenge is to integrate these data with the data on individual patients' usage. An enterprise-level data warehouse would hold all this information in one database. The first step is to standardize the data type and length of the same fields across applications. This provides a single-door place for inserts and updates, so the same data is seen by every application connected to the Enterprise Data Warehouse.
The various applications pool in their data, which are standardized into the same format and arranged in proper tables; since the tables have connections and stored procedures, the data can then be re-distributed to the applications for their use. The data warehouse in turn can churn out data for aggregation studies and special studies, generating reports in chosen formats. Analysts interested in aggregation at the corporate level can then use them as they wish.
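A minimal sketch of the standardization step, assuming a hypothetical canonical schema of field types and lengths:

```python
# A hypothetical canonical schema: field name -> (type, maximum length).
CANONICAL = {"patient_name": (str, 60), "nha_no": (str, 12), "age": (int, None)}

def standardize(record):
    """Coerce each known field to its canonical type and length."""
    out = {}
    for field, value in record.items():
        if field not in CANONICAL:
            continue                      # drop fields the warehouse ignores
        typ, max_len = CANONICAL[field]
        value = typ(value)
        if max_len is not None:
            value = value.strip()[:max_len]
        out[field] = value
    return out

# Two applications sending the same data in different shapes:
print(standardize({"patient_name": "  J. Smith ", "age": "42"}))
print(standardize({"patient_name": "J. SMITH", "nha_no": 123456789012}))
```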
Rationalizing the data coming from different types of databases can be handled through cross-database utilities that communicate to and fro between the databases. The Enterprise Data Warehouse itself would live in one single database technology; here comes the techno-financial decision about the cost of licensing and the database vendor's support in case of disasters.
Stored procedures written in the enterprise application take care of the to-and-fro data movement between two applications and their databases. The front-end layer that connects the different applications is generally known as Enterprise Application Integration.
With the Enterprise Data Warehouse at the back end and Enterprise Application Integration at the front end, data tables can be shared with applications and users under the proper privileges. The user-security logic allows some users to see specified data from a table while others see other fields, or the whole field set. User-access logic is installed right at inception; it is part of the technology.
Reports are the face of the application: they show data outside it. Modern data warehouses have fine report-writing and report-generating sections that automatically show data in pre-designed or customised report formats. The user can simply drag and drop the fields she wants into a report space, declare the aggregation arrangement and design the placeholders to have a report generated.
Reports can also be externalized for use in other applications: a URL is generated that serves an XML file, and that file can be sent to an external application with proper authentication. This externalization is a novel way to integrate the data churned out by one application into the programming of an external application.
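A minimal sketch of such externalization using Python's standard library; the report and element names are invented.

```python
import xml.etree.ElementTree as ET

# Build the report as XML (element names invented for illustration).
report = ET.Element("report", name="visits_per_gp")
for gp, visits in [("Dr. Rao", 2), ("Dr. Lee", 1)]:
    row = ET.SubElement(report, "row")
    ET.SubElement(row, "gp").text = gp
    ET.SubElement(row, "visits").text = str(visits)

payload = ET.tostring(report, encoding="unicode")
print(payload)  # this string is what the generated URL would serve

# The external application parses the feed back into usable values:
for row in ET.fromstring(payload):
    print(row.findtext("gp"), row.findtext("visits"))
```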

How can the organizational hierarchy and the data view be implemented?

The organizational hierarchy needs and handles different data elements differently. HR and finance are interested in salary details that they might not want to share with everyone; sales and marketing want prices of consumables and medicines that they might not want to share with others; and some information may be superfluous even for people at the highest levels. All of this depends on user-access-level privileges. Privileges are no longer associated simply with a form or screen; nowadays every data element, or cell, can carry specific user privileges. Applications are so sophisticated these days that user access can be restricted on certain values of a field or data element; for example, only a few people may see those salaries that lie below a certain value. This advanced filtering can carry complex logic, as per set theory: conditions imposed on various fields can be combined into complex logic, and the required values filtered. Such logic is best installed at the database level, not the application level. The checks are generally implemented with triggers, which connect the logic and act during the insert or update of any record. Only the allowed fields and values are shown to the specified users; the applications' forms show up with the other fields disabled or blanked out.
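A minimal sketch of such cell-level privileges expressed in application code (the document places this logic in database triggers in practice); the roles, fields and threshold are invented.

```python
# Each role sees only its allowed fields; rows pass a per-role predicate.
RULES = {
    "hr":    {"fields": {"name", "salary"}, "where": lambda r: True},
    "sales": {"fields": {"name", "region"}, "where": lambda r: True},
    "clerk": {"fields": {"name", "salary"},
              "where": lambda r: r["salary"] < 50000},  # value-level filter
}

rows = [{"name": "A", "salary": 40000, "region": "north"},
        {"name": "B", "salary": 90000, "region": "south"}]

def view(role):
    rule = RULES[role]
    return [{k: v for k, v in r.items() if k in rule["fields"]}
            for r in rows if rule["where"](r)]

print(view("hr"))     # both salaries visible
print(view("clerk"))  # only salaries below the threshold
```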
The data warehouse and data marts are now the essential central core of any application. Data are first cleaned and rationalized so that all anomalies are removed; anomalies arise when data arrive with undesired values. Data cleaning is thus another important application: data formats are checked, boundary values are checked, and checks run across data fields and values. For example, if a patient is female and older than some threshold, then some medicines may not be allowed to be administered; if a person has certain ailments they might not be admitted to hospital; if a patient is diagnosed with certain critical ailments, referral to a more advanced hospital may be mandatory; and if the number of free beds falls below the number required, further admittance must be barred. These are business rules that need to be placed either at the application level or at the database level.
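A minimal sketch of such cross-field cleaning checks; the thresholds and rules are invented for illustration and are not medical guidance.

```python
RESTRICTED_MEDICINES = {"medicine_x"}  # hypothetical restricted list

def check(record):
    """Return the list of rule violations for one incoming record."""
    errors = []
    if not (0 < record["age"] < 130):                  # boundary-value check
        errors.append("age out of bounds")
    if (record["sex"] == "F" and record["age"] > 60    # cross-field check
            and record["medicine"] in RESTRICTED_MEDICINES):
        errors.append("medicine not allowed for this patient group")
    if record["admit"] and record["beds_free"] <= 0:   # capacity rule
        errors.append("no beds free: admittance barred")
    return errors

print(check({"age": 72, "sex": "F", "medicine": "medicine_x",
             "beds_free": 0, "admit": True}))
```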

Physical considerations

When a large number of users are on the application, validating every field against the database as it is entered generates many database accesses and slows the response time. Response time is the most crucial consideration in designing any system, so programmers and designers try to reduce database accesses to the minimum: ideally the database is hit once, and the response time is kept down.
Generally, forms on hand-held devices connect to the cloud through the web. A web server takes care of load-balancing the requests. In any kind of web application there are three layers: the web server, the application server and the database server. Each server sends a request to the next level and waits for the response, and the summation of these responses comprises the response time. The web server balances the load, attaches a handler (with a handler id) to each request, and collects all the requests coming from users; each terminal is identified through this handler id, and a map is maintained at the web server linking the handler id to the browser and the terminal (terminals are now identified through their IP addresses). The application server collates these requests and groups the similar ones; the bunched requests are sent to the database server to fetch the values. The responses are served back to the application server, where each request-response is resolved; the application server then redistributes the proper responses to the web server, which sends them back to the users' terminals. The whole operation should take a fraction of a second, since people do not want to wait for their responses. With faster response times more transactions can be served, and the efficacy of the system depends on how many transactions are served per unit time. This whole scheme is used by front-end application frameworks such as Java (with the J2EE framework) and Microsoft's .NET technology. These languages connect to the database mainly through the ODBC and JDBC protocols, which impose a limitation on data-transfer volume; this has led designers to use native database communication, the specialized protocols that database vendors expose to external applications for data transfer.
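A minimal sketch of the three layers and of bunching similar requests, with in-process functions standing in for the three servers; everything here is illustrative.

```python
from collections import defaultdict

def web_server(raw_requests):
    """Attach a handler id so each terminal's response can be routed back."""
    return [{"handler": i, "ip": ip, "query": q}
            for i, (ip, q) in enumerate(raw_requests)]

def app_server(requests, db):
    """Group similar requests so one database hit serves many terminals."""
    groups = defaultdict(list)
    for r in requests:
        groups[r["query"]].append(r["handler"])
    results = {q: db(q) for q in groups}  # one fetch per distinct query
    return {h: results[q] for q, hs in groups.items() for h in hs}

def database(query):
    return f"rows for {query!r}"          # stand-in for the database server

responses = app_server(web_server([("10.0.0.1", "stock gauze"),
                                   ("10.0.0.2", "stock gauze"),
                                   ("10.0.0.3", "stock saline")]), database)
print(responses)  # handler id -> response, sent back via the web server
```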

In lieu of Conclusion

After the EDW, there should be analytic tools that analyse the data in terms of aggregations and show the results pictorially; these are called data analytics tools, and almost every database vendor has a proprietary one. The analytic tools feed the aggregation results either into reports and externalized dashboards, or into further applications running very advanced statistical and genetic-programming tools, so that one can find the trends in the data sets and the innate rules within them. This is data mining at the advanced level, where advanced mathematical formulae are used. Predictive and descriptive analytics are now used to fetch hidden relations from huge pools of data, helping analysts figure out where they are going, and neural-network algorithms and genetic programming are extensively used to find the hidden functional representation (Miller, 2013). Even in dash-boarding, advanced techniques are used with Flash to draw dynamic, running graphs in three-dimensional, rotating representations. The general health of a health-care system changes with every transaction, and those changes need to be depicted continuously, on a running basis. Present technology thus allows dealing with dynamic change, and dynamic representation helps analysts take instant decisions with agility, at the point of occurrence.

Analyse and Design a System

The purpose of the project is to develop a travel portal that fetches information from hotel sites in the Philippines. With the influx of travellers into the country, the hotel and tourism business is growing rapidly, and visitors often look for a single website that offers online booking for most registered hotels.
Based on this situation, the idea of building a portal offering online booking and up-to-date information has been conceived. Most registered hotels have their own websites, but they often lack an online booking procedure. The proposed website will help users find a list of available hotels and search by budget range, date and availability. Once users find the details, they can opt for online booking through an integrated payment gateway and receive confirmation via email. The site will also let users cancel reservations in case of events or unforeseen circumstances affecting the traveller.
The website will be designed with a simple look offering maximal functionality, to make for a user-friendly application. It will also be made compatible with mobile systems, so users can view it on any hand-held device as well as in a web browser.

Audience

a.       End-user / Customer
b.      Hotel owner
c.       Administrator

Functional Detail

End-User/Customer

Following are the functional details a viewer finds on entering the site.

Search

The search function lets a visitor find the hotels they are looking for. To perform a search, the visitor enters the attributes below:

·         Select location in the Philippines
·         Select check-in date
·         Select check-out date
·         Select number of nights
·         Select member type (Adult / Children)

Once the user fills in this information and clicks 'Search', the site lists all the hotels within the parameters entered (see the query sketch after this module list).
View hot deals

The home page also lists new deals from hotels that a user can avail; to claim an offer, the visitor simply enters the promotional code.

View hotels

This section lists some of the popular hotels. Ratings are calculated from feedback received from visitors who stayed at these hotels during their time in the Philippines.

Contact Helpdesk

Users can contact the support section to reach the customer support team at any time.
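As a minimal sketch of how the search form could drive a parameterized query (the schema, field names and data are invented; the proposed system would use MS-SQL behind ASP.NET, but SQLite via Python illustrates the same idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE hotel (name TEXT, city TEXT,
                                    rate REAL, rooms_free INTEGER)""")
conn.executemany("INSERT INTO hotel VALUES (?, ?, ?, ?)", [
    ("Bay Inn", "Manila", 80.0, 3), ("Palm Lodge", "Cebu", 55.0, 0)])

# The search form's fields become named, parameterized query inputs
# (placeholders keep user input out of the SQL text itself).
form = {"city": "Manila", "budget_min": 50.0, "budget_max": 100.0}

hits = conn.execute("""SELECT name, rate FROM hotel
                       WHERE city = :city
                         AND rate BETWEEN :budget_min AND :budget_max
                         AND rooms_free > 0""", form).fetchall()
print(hits)  # hotels within the visitor's parameters
```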

Hotel Owner Module

This module is mainly for hotel owners who would like to advertise and list their services on the proposed website. Following are the functional modules we propose for this user type:

Contact for submission

To become a member of the site, a hotel owner first has to intimate the webmaster by filling in a form with the following details:

·         City
·         Hotel Name
·         First Name / Last Name
·         Contact No
·         Email ID
·         Confirm Email ID

Once this information is provided, the webmaster receives a notification, after which the company takes the necessary steps to register the hotel with the site and enable its listing.

Administrative Module

The admin module is developed for the super-user, who can manage the site, the end-users and hotel owners, and other functional areas. Following are some of the basic functionalities we propose for this section:
·         Login for admin
·         Update account (incl. password update)
·         Assign sub-admin with specific task using Task Manager
·         View visitors list
·         View list of new users that made online bookings
·         View number of online bookings (report format: daily / weekly / monthly)
·         View transaction details
·         Refund management
·         View hotel owner’s information
·         View number of bookings for a hotel
·         Payment gateway management
·         Manage payment transaction with hotel owner
·         Manage API

Use of API

An API (Application Programming Interface) is one of the most common and accepted methods of fetching information from other websites. In this case, once a hotel owner registers and is approved for listing, the owner's website database is connected through the API so that real-time data can be fetched. Once the data are received and stored in the main database, they are organized and presented to the user in a friendly format.
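A minimal sketch of the XML data-feed idea using Python's standard library; the feed URL and element names are hypothetical, and a real integration would follow each partner's actual API specification.

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://partner-hotel.example/feed/rooms.xml"  # hypothetical

def fetch_feed(url):
    """Pull the partner's XML feed over HTTP."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

def rows_from_feed(xml_text):
    """Reorganize the feed into rows ready for the main database."""
    return [{"room": r.findtext("name"),
             "rate": float(r.findtext("rate")),
             "free": int(r.findtext("available"))}
            for r in ET.fromstring(xml_text).findall("room")]

# A sample payload, in place of a live call to fetch_feed(FEED_URL):
sample = """<rooms>
  <room><name>Deluxe</name><rate>80.0</rate><available>3</available></room>
</rooms>"""
print(rows_from_feed(sample))  # [{'room': 'Deluxe', 'rate': 80.0, 'free': 3}]
```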

Technologies

The proposed website will be developed with the following technologies:
·         The ASP.NET Framework will be used to develop the web portal (the plan is to design a new system by revamping the existing website).
·         MS-SQL Server will be used as the database.
·         The system will use an XML data-feed method to fetch data from other websites; in short, an API will be developed for this purpose.
·         The website will be designed in HTML5 to be both web- and mobile-friendly.
·         SSL will be used to secure online transactions.
·         The payment gateway will be an online merchant account for credit-card transactions.

References

Booch, G., Maksimchuk, R., Engle, M., Young, B., Conallen, J., & Houston, K. (2007). Object-Oriented Analysis and Design with Applications. New York: Addison-Wesley Professional.
Chamberlin, D. (1976). Relational Data-Base Management Systems. Computing Surveys, 43-66.
Date, C. (2003). An Introduction to Database Systems. New York: Addison-Wesley.
Dietrich, S., & Urban, S. (2011). Fundamentals of Object Databases: Object-Oriented and Object-Relational Design. Morgan & Claypool Publishers.
Fry, J., & Sibley, E. (1976). Evolution of Data-Base Management Systems. Computing Surveys, 7-42.
Haigh, T. (2006). “A Veritable Bucket of Facts”: Origins of the Data Base Management System. SIGMOD Record, 33-49.
Harrington, J. (1999). Object-Oriented Database Design Clearly Explained. Morgan Kaufmann.
Laberge, R. (2011). The Data Warehouse Mentor: Practical Data Warehouse and Business Intelligence Insights. New York: McGraw-Hill Osborne Media.
Miller, T. (2013). Modeling Techniques in Predictive Analytics: Business Problems and Solutions with R. FT Press.
Selinger, P., Astrahan, M., Chamberlin, D., Lorie, R., & Price, T. (1979). Access Path Selection in a Relational Database Management System. IBM Research Division.
Stonebraker, M. (1981). Operating system support for database management. Communications of the ACM, 412-418.


