Friday, November 29, 2019

Jack Worthing as the Image of the Englishman of the Victorian Epoch in The Importance of Being Earnest by Oscar Wilde

Introduction

The Importance of Being Earnest is a famous play by Oscar Wilde, an outstanding English playwright. The play was highly appreciated by the public and reviewers, and due to its involved plot and good sense of humor it continues to attract large audiences at theatres all over the world. The aim of this essay is to analyze Jack Worthing, one of the main characters of the play, whom the author represents as a typical wealthy Englishman of the Victorian epoch.

The Fabricated Story

John Worthing is a young, wealthy gentleman and the owner of a large country estate. We first meet Jack, as his friends call him, at the London apartment of Algernon Moncrieff, Jack's best friend. Both friends have grown tired of their ordinary lives and have invented stories about themselves. Algernon says to Jack, "You have invented a very useful younger brother called Ernest, in order that you may be able to come up to town as often as you like. I have invented an invaluable permanent invalid called Bunbury, in order that I may be able to go down into the country whenever I choose" (Wilde n.pag.).

Jack and Gwendolen

While the friends are talking about how things stand and sneering at their lives, Lady Bracknell and Gwendolen, her beautiful daughter, enter the room. Jack and Gwendolen are in love with each other. However, she does not know his real story. Besides, she has always said that her dream is to marry a man named Ernest (Wilde n.pag.). Algernon says that the way Jack flirts with Gwendolen is "perfectly disgraceful" and that "it is almost as bad as the way Gwendolen flirts with him" (Wilde n.pag.). However, the fabricated story stands in the way of Jack's intention to marry Gwendolen: Lady Bracknell says that she will not give her daughter in marriage to someone without a family.

Jack and Lady Bracknell

The interview between Lady Bracknell and Jack is probably one of the most interesting and amusing episodes in the play. From this conversation we learn that Jack is twenty-nine years old, that he has "a country house with some land, attached to it, about fifteen hundred acres," and that he lost his parents and knows little about them (Wilde n.pag.). Lady Bracknell also asks him about his political convictions:

"Lady Bracknell. What are your politics?
Jack. Well, I am afraid I really have none. I am a Liberal Unionist.
Lady Bracknell. Oh, they count as Tories. They dine with us. Or come in the evening, at any rate." (Wilde n.pag.)

At the end of the play, the true stories of Jack and Algernon are uncovered. Nevertheless, Gwendolen agrees to marry Jack. Incidentally, it turns out that Jack's real name is Ernest John, given to him after the father he had never known.

The Main Traits of the Character

Wit, respectability and a quick mind are the brightest traits of Jack's character. Despite the conventions of his class, Jack shows a good sense of humor and an independent outlook. Although he created a false story about himself, he can hardly be described as a dishonest man. We know from his dialogue with Lady Bracknell that he does not hide his views, though they are sometimes not in his favor.
The idea of creating a fabricated story about his life shows, rather, his desire to take a rest from the conventions of the haut monde.

Conclusion

To sum up, Jack Worthing is one of the main characters of Oscar Wilde's play The Importance of Being Earnest. He represents the typical Englishman of the upper social class of the Victorian epoch; his wit and cleverness make him one of the brightest male characters in Wilde's drama. His independence of views characterizes him as a man not wedded to the conventions of his time.

Works Cited

Wilde, Oscar. The Importance of Being Earnest. 2006. Web. http://www.gutenberg.org/ebooks/844?msg=welcome_stranger

Monday, November 25, 2019

September 11th, 2001 essays

September 11, 2001, was a day of change for all of us in America. Everyone has changed in some essential way, and it seemed as if nothing would ever be the same after the terrorist attacks on American soil. I have changed in several different ways since September 11, 2001. Since the terrorist attacks, I am very thankful I live in a free country. The attacks reminded me that America has the most individual rights and freedom. I feel more love towards my country, and I have more patriotism than ever before. September 11, 2001 has helped me to understand more about freedom and patriotism. America now knows that this is reality, and that we don't take our freedom for granted. As a country, we joined together and rose up when we faced the ultimate low of the tragedy of all the lost lives. We formed a more positive attitude towards our country, and we helped the families as they faced the tragedy of losing loved ones. The terrorists did not instill fear in me; instead they instilled in me a deeper respect for our country. To show more patriotism, I asked my family to hang an American flag from our front porch. The American flag has been on display every day since the terrorist attacks. The flag gives me a great feeling of American pride, and it never lets me forget the people that died to protect our country. Even though the United States government had some clues that might have prevented this attack, I support the government more now. I strongly support President Bush in the way he has handled the terrorists and in what we have done to the terrorists in Afghanistan. Since the tragedy, I have realized that it has pulled us together as a country and that we are willing to defend our freedom. I have great respect towards the military. I respect them because they are willing to protect our nation from future attacks, and they will try their best to make our country as safe as possible. I also have more respe...

Friday, November 22, 2019

Comparing Marx and Tocqueville - Research Paper

Manifesto of the Communist Party

In the beginning of the manifesto, Marx details the history and evolution of the means of production from feudal to modern times. As Europe expanded through colonization and exploitation of its colonies, the feudal system collapsed, and "the Manufacturing System took its place" (Marx, 1888, p. 8). The feudal lords fell to the industrial middle class. Technology and trade continued to expand exponentially, and the industrial middle class was replaced by "industrial millionaires, the leaders of whole industrial armies, the modern bourgeois" (Marx, 1888, p. 8). The bourgeois, or new ruling class, controlled the means of production and held all political power. The modern working class, the proletariat, "must sell themselves piecemeal, as a commodity, like every other article of commerce" (Marx, 1888, p. 14). Their labor was unskilled and grueling. They worked under the eyes of the bourgeois manufacturer for minimum wage in factories. With automation came a blurring of the distinction of who was needed to work. Women and children could do repetitive tasks as well: "All are instruments of labor, more or less expensive to use, according to their age and sex" (Marx, 1888, p. 15).

The Manifesto contends that all history is a form of conflict between oppressor and oppressed. The names and ideologies of the oppressors change, but the fundamental dynamic remains the same. At the end of the 19th century, globalization was occurring, with capitalism as its driving force. Marx and Engels were well aware of this tendency: "for exploitation, veiled by religious and political illusions, it [the bourgeoisie] has substituted naked, shameless, brutal exploitation" (1888, p. 10).

With this in mind, Marx turns his attention to the bourgeois idealization of the family. By making its values and expectations into law, the bourgeoisie legitimizes the system of production (1888, p. 24). The family in its ideal form exists only for the wealthy. It claims to be an ideal form; yet the bourgeois man is the only real beneficiary. As the property owner, and as the owner of the means of production, the upper-class man owns both his wife and his children. The right of ownership guarantees him the freedom to determine his own destiny. As property, the wife and children, although materially provided for, have no real freedom. The proletarian is unable to live even like this, because he, his wife and his children must work in order to survive: "The bourgeois claptrap about the family and education, about the hallowed co-relation of parent and child becomes all the more disgusting [because] ... by the actions of modern industry, all family ties among the proletarians are torn asunder and their children transformed into ... articles of commerce and instruments of labor" (Marx, 1888, p. 25).

Marx advocates replacing home education with social education, in order to create an even start for rich and poor children. This "even start," as I phrase it, is a bit utopian, but it is the goal of Marxism to overthrow the unfair, elitist results of the capitalist system. The notion of complete ownership of the means of production allows the elite man to possess indiscriminately (1888, p. 25). As all women [as well as lower-class men] are his property, then all can be used. The proletarian men are only instruments to be used in the amassing of capital. All women, rich, poor, married, unmarried and prostitutes,

Wednesday, November 20, 2019

E-Commerce Methods of Amazon - Essay

Because Amazon is able to keep its costs low, it can offer customers lower prices than its competitors: lower production costs result in savings that Amazon is able to pass on. Despite selling products for very little profit, Amazon is able to gain traction in the market simply because of the sheer volume of orders that it processes every day. Because Amazon focused on selling its products online from the very beginning, it was able to become well known as an online goods store. With Internet purchases increasing year on year, Amazon is well placed to dominate the market for years to come. New market entrants are simply unable to compete with the purchasing power that Amazon commands through its vast number of customers. The disadvantage of Amazon selling only through the Internet is that if online commerce is disrupted in any way over the next few years, the business will feel the effects directly. This is why Amazon should consider diversifying its business model so that this risk is lessened.

Monday, November 18, 2019

Marketing Strategy - Essay

For Firm 1, costs for customer service remained at the same value, and the firm retained the same amount of cash reserve at the end of period one as in period zero. They also did not alter their pricing strategy, keeping a markup of 49.48% for channels 1 and 2, and they kept the same number of sales representatives. They used to distribute equally across both channels, but this year they increased the quantities in channel 1, and the units sold there were heavier than in channel 2. It must be noted that all products ordered in period zero were sold and disposed of, leaving no ending inventory for the firm, but this was not the case in period 1. Despite no changes in strategy, the firm did not deliver the same results.

For Firm 2, the distribution intensity is largely found in channel 2, with a higher markup relative to channel 1 and even to period zero's data. They increased their production by 37.5% but were not able to sell it all. They incurred expenses for R&D for product modifications, and product features were improved. They also increased their advertising, as seen in the 16% increase in that expense. It should be noted that Firms 1, 2 and 3 did not invest in Marketing Research Reports. They had only very minimal expenses for such reports, and they were not competitive in this regard because they failed to connect with consumers and were not equipped with knowledge of what the market needs and demands: they could add a feature to their product and still not entice consumers. Additional promotions and commission expense was incurred, and they concentrated on improving customer service, incurring an additional $27,500 outflow. Firms 2 and 3 applied the same strategies.

Lastly, Firm 4, considered the most successful, implemented various strategies and optimized the changes. Product development is the root of this firm's major edge over the other three: the values of its product features were all improved, and the highest average retail price, $279.70 for both channels, was achieved by this firm. Unlike the previous year, they increased production by 5,000 units, and demand still exceeded it by 3,710 units, leaving no inventories to be transferred; they also earned a positive net contribution. Like the other three firms they had no remaining reserve funds, but they increased their budget for the next period by 49.85%. More distribution intensity of sales went to channel 1, which made up 80.4% of total units sold. Their competitive strategy was to focus on product development and market research. On the promotions side, they reduced advertising expense, which affected the pioneering type, while sales promotion carried more weight this year, and the results prove its effectiveness relative to distribution channel 1, on which they obviously concentrated. The results of Firm 4 were the positive reverse of Firms 2 and 3, and certainly of Firm 1. Their market segmentation is consistent with their product positioning strategy.

2.) From the perspective of Firm 2, competition might unfold in this industry, and there is potential primarily because this period is considered a trial-and-error allowance for firms deviating from their norms to explore the market. With this, there are major points, and the first relates to launching a new product. As the period ended, only Firm 4 earned a positive net contribution. All firms had the option to adopt a new product at the beginning, but none opted for it. In the coming of second

Saturday, November 16, 2019

Comparative Study of Advanced Classification Methods

CHAPTER 7 TESTING AND RESULTS

7.0 Introduction to Software Testing

Software testing is the process of executing a program or system with the intent of finding errors, termed bugs; it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable and humans have only a limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.

7.2 Testing Process

The basic goal of the software development process is to produce software that has no errors or very few errors. In an effort to detect errors soon after they are introduced, each phase ends with a verification activity such as a review. However, most of these verification activities in the early phases of software development are based on human evaluation and cannot detect all errors. The testing process starts with a test plan, which specifies all the test cases required. Then the test unit is executed with the test cases, and reports are produced and analyzed. When testing of a unit is complete, the tested units can be combined with other untested modules to form new test units. Testing of any unit involves planning test cases, executing the test cases, and evaluating the results of the testing.

7.3 Development of Test Cases

A test case in software engineering is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. Test cases follow a certain format, given as follows (a minimal sketch in code appears below):

Test case id: Every test case has an identifier uniquely associated with a certain format. This id is used to track the test case in the system upon execution. A similar test case id is used in defining the test script.

Test case description: Every test case has a description of what functionality of the software is to be tested.

Test category: The test category defines the business test case category, such as functional tests, negative tests or accessibility tests; these are usually associated with the test case id.

Expected result and actual result: These are implemented within the respective API. As the testing is done for a web application, the actual result will be available within the web page.

Pass/fail: The result of a test case is either pass or fail. Validation occurs based on the expected and actual results: if they are the same, the test case passes; otherwise it fails.

7.4 Testing of Application Software

The testing done on the application software is as follows.

7.4.1 Integration Testing

In this phase of software testing, individual software modules are combined and tested as a group. The purpose of integration testing is to verify the functional, performance and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black-box testing, with success and error cases simulated via appropriate parameter and data inputs.
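Test cases of the kind described in Section 7.3 can be written directly as executable code. The sketch below is a minimal illustration using Python's unittest framework; the classify function and the test case id are hypothetical placeholders, not part of the project's actual code.

```python
import unittest

def classify(features):
    # Hypothetical stand-in for the classifier under test:
    # returns class label 1 or -1 for a feature vector.
    return 1 if sum(features) >= 0 else -1

class TestClassifier(unittest.TestCase):
    def test_TC01_positive_side(self):
        """Test case id: TC01. Category: functional.
        Description: a point on the positive side of the
        hyperplane should be assigned class label 1."""
        expected = 1                        # expected result
        actual = classify([0.5, 1.2, 0.3])  # actual result
        self.assertEqual(expected, actual)  # pass/fail is decided here

if __name__ == "__main__":
    unittest.main()
```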
Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases are constructed to verify that all components within assemblages interact correctly, for example across procedure calls or process activations, and this is done after testing the individual modules, i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. In this approach, all or most of the developed modules are coupled together to form a complete software system, or a major part of the system, which is then used for integration testing. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by the design.

The top-down approach to integration testing requires that the highest-level modules be tested and integrated first. This allows high-level logic and data flow to be tested early in the process, and it tends to minimize the need for drivers. The bottom-up approach requires that the lowest-level units be tested and integrated first. These units are frequently referred to as utility modules. By using this approach, utility modules are tested early in the development process and the need for stubs is minimized. The third approach, sometimes referred to as the umbrella approach, requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-up pattern.

7.4.1.1 Test Cases for Support Vector Machine

The Support Vector Machine is tested for attributes which fall only on the positive side of the hyperplane, attributes which fall only on the negative side of the hyperplane, attributes which fall on both the positive and negative sides of the hyperplane, and attributes which fall on the hyperplane itself. The expected results match the actual results.

Table 7.1: Test Cases for Support Vector Machine

7.4.1.2 Test Cases for Naive Bayes Classifier

The Naive Bayes Classifier is tested for attributes which belong only to class '1', attributes which belong only to class '-1', and attributes which belong to both class '1' and class '-1'. The expected results match the actual results.

Table 7.2: Test Cases for Naive Bayes Classifier

7.5 Testing Results of Case Studies

A case study is a particular example of something used or analyzed in order to illustrate a thesis or principle; it is a documented study of a real-life situation or of an imaginary scenario.

7.5.1 Problem Statement: Haberman Dataset

The Haberman dataset contains cases from the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer. The task is to determine whether the patient survived 5 years or longer (positive) or died within 5 years (negative).
@relation haberman
@attribute Age integer [30, 83]
@attribute Year integer [58, 69]
@attribute Positive integer [0, 52]
@attribute Survival {positive, negative}
@inputs Age, Year, Positive
@outputs Survival

[Training set and test set listings omitted.]

Weight vector and gamma: w = [0.0991, 0.0775, 0.2813], gamma = 0.3742

[Predicted class labels of the test set omitted.]

Confusion matrix of the SVM classifier:
True Positive (TP) = 8, False Negative (FN) = 27
False Positive (FP) = 8, True Negative (TN) = 110

AUC of classifier = 0.517792
Accuracy of classifier = 77.124183%, error rate = 22.875817%
F-score = 31.372549, precision = 50.0, recall = 22.857143, specificity = 93.220339

[Fig 7.1: Bar chart of SVM for various performance metrics.]

Confusion matrix of the Naive Bayes Classifier:
True Positive (TP) = 10, False Negative (FN) = 25
False Positive (FP) = 11, True Negative (TN) = 107

AUC of classifier = 0.5202
Accuracy of classifier = 76.4706%, error rate = 23.5294%
F-score = 35.7143, precision = 47.6191, recall = 28.5714, specificity = 90.678

[Fig 7.2: Bar chart of NBC for various performance metrics.]
[Table 7.3: Comparison of SVM and NBC for various performance metrics.]
[Fig 7.3: Bar chart for comparison of SVM and NBC.]

7.5.2 Titanic Dataset

The Titanic dataset gives the values of four attributes: social class (first class, second class, third class, or crew member), age (adult or child), sex, and whether or not the person survived.

@relation titanic
@attribute Class real [-1.87, 0.965]
@attribute Age real [-0.228, 4.38]
@attribute Sex real [-1.92, 0.521]
@attribute Survived {-1.0, 1.0}
@inputs Class, Age, Sex
@outputs Survived

[Training set and test set listings omitted.]

Weight vector and gamma: w = [-0.1025, 0.0431, -0.3983], gamma = 0.3141

Confusion matrix of the SVM classifier:
True Positive (TP) = 154, False Negative (FN) = 181
False Positive (FP) = 64, True Negative (TN) = 701

AUC of classifier = 0.426392
Accuracy of classifier on the test set = 77.727273%, error rate = 22.272727%
F-score = 55.696203, precision = 70.642202, recall = 45.970149, specificity = 91.633987

[Fig 7.4: Bar chart of SVM for various performance metrics.]

Confusion matrix of the Naive Bayes Classifier:
True Positive (TP) = 197, False Negative (FN) = 138
False Positive (FP) = 148, True Negative (TN) = 617

AUC of classifier = 0.4782
Accuracy of classifier = 74%, error rate = 26%
F-score = 57.9412, precision = 57.1015, recall = 58.806, specificity = 80.6536

[Fig 7.5: Bar chart of NBC for various performance metrics.]
[Table 7.4: Comparison of SVM and NBC for various performance metrics.]
[Fig 7.6: Bar chart for comparison of SVM and NBC.]
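The performance figures above follow the standard confusion-matrix definitions. As a quick check, the minimal Python sketch below reproduces the SVM metrics reported for the Haberman test set from its TP/FN/FP/TN counts; the function name is ours, not from the project code.

```python
def classifier_metrics(tp, fn, fp, tn):
    """Standard performance metrics from confusion-matrix counts."""
    total = tp + fn + fp + tn
    accuracy = 100.0 * (tp + tn) / total
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)           # also called sensitivity
    specificity = 100.0 * tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return {
        "accuracy": accuracy,
        "error_rate": 100.0 - accuracy,
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "f_score": f_score,
    }

# SVM on the Haberman test set: TP=8, FN=27, FP=8, TN=110
print(classifier_metrics(8, 27, 8, 110))
# -> accuracy 77.12%, precision 50.0%, recall 22.86%,
#    specificity 93.22%, F-score 31.37, matching the values above
```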

Wednesday, November 13, 2019

Health of Elderly Australia Essay

How is Australia's aging population supported by the Australian Health Care System?

PREAMBLE

Since 1901, Australia's elderly population has risen dramatically, with those aged 65 and over estimated to make up just under 15% of Australia's population (Northern Health Research). The median age of the country has risen from 22 to 35 years, and the proportion of people aged 0-14 has decreased from 35.1% in 1901 to 20.7% in 2001 (Mayne Health Research). As this "greying of the nation" continues, mirroring global trends, there has been an influx of residents admitted into aged care facilities around the country. The aim of this report is to perform a case study of an elderly member of the community cared for at the St. Paul's Aged Care Centre in Caboolture, after visiting the facility every Friday for a period of five weeks. Not only will the resident's health be investigated, but also the effectiveness of the aged care centre in catering for the rights and needs of its residents in relation to the Ottawa Charter. Suggestions will be made on how the centre could be improved in the future, culminating in a detailed summary of the report's findings.

INTRODUCTION

Upon commencing weekly visits, each group of students was assigned a particular resident and advised to monitor their health, behaviour and needs. After being assigned an elderly gentleman by the name of Ken, it quickly became evident why he was in care. Ken was suffering the early stages of dementia, often forgetting names and experiencing short-term memory loss. According to the Alzheimer's Association in Queensland, the early stages of dementia involve the destruction of brain cells in isolated areas, with short-term memory loss often among the first signs. He was also suffering severe arthritis of the left knee that seemed to be afflicting each of his elbows as well. Arthritis is a term loosely applied to inflammatory, metabolic, or degenerative diseases involving one or more of the joints (Collier's Encyclopedia). It is a prevalent, crippling disease affecting tens of millions worldwide (www.arthritis.org/). The final health concern noticed was Ken's social isolation. Often reluctant to join in with conversations and games, Ken seemed very isolated and lonely, probably further reinforcing the signs of dementia setting in.

EXECUTIVE SUMMARY

On completion of the weekly visits and looking back on the time spent with the residents, ... ... the facility, encouraging a more sociable and homelike setting. This would further enhance residents' social wellbeing by allowing a friendlier environment to be created, and according to Dr. Luke Ryse, "A person who is living a life in a favoured setting is less likely to suffer stress, depression and anxiety often associated with aged care facilities." (www.agecare.com/oz/st/)

Another improvement would be to give greater freedom to the residents: for example, serving meals at different times in the evenings to give them a sense of control, and allowing residents accompanied visits outside the facility on a weekly basis. Both would lift spirits in the centre and are improvements that are easily attainable. They allow residents greater independence and, as mentioned earlier, this fosters better health. Apart from those two areas, however, the St. Paul's Aged Care Facility in Caboolture is effectively caring for and adhering to the needs of the elderly while promoting health through the implementation of the principles of the Ottawa Charter.

Monday, November 11, 2019

Dell vs. HP Performance & Financial Analysis

Financial Analysis

Common-Size Analysis

Common-Size Income Statement Analysis

The common-size income statement for Dell shows a relatively flat history for cost of goods sold compared to sales, from 82.27% in 2006 to 82.49% in 2010. Dell's five-year average for cost of goods sold to sales was 82.23%, which is a bit higher than HP's five-year average of 75.96%. This in turn gives HP higher gross revenue than Dell, most likely through obtaining raw materials and goods at lower costs, giving HP greater room for an increased profit margin. This increased profit margin can allow HP to offer more discounts than Dell may be able to afford, or to increase spending in areas of investment for the company.

Another area of interest within the common-size income statement relates to selling, general and administrative expenses to sales. Overall, through the years 2006 to 2010, Dell saw an increase in this area, growing from 9.05% in 2006 to 12.22% in 2010. Meanwhile, HP experienced the exact opposite effect, with this category declining from 12.29% in 2006 to 9.99% in 2010. According to Dell's annual report, the major increase was due to the acquisition of Perot Systems. It also appears that over the last five years, Dell's strategy of selling products directly to customers has been adopted by many competitors, allowing them to decrease some of their overhead and the commissions paid to retailers while increasing sales. In the same time span, as competitors partially adopted the strategy that made Dell prominent, Dell began to place more products in retail stores to compete directly on the front lines with its competition, as mentioned in its Management's Discussion and Financial Analysis meetings. This approach has caused a good percentage of sales revenue to go to retailers and distributors, straining the ability to maximize net income for the present.

Research, development and engineering for Dell as a percentage of sales was 0.82% in 2006 and grew slightly to 1.18% in 2010. HP's research, development and engineering to sales is roughly three times the amount that Dell dedicated; however, HP drew down its research, development and engineering to sales from 3.92% in 2006 to 2.35% in 2010. The five-year average in this category was 0.99% for Dell and 3.04% for HP. Even with HP's much higher research, development and engineering percentage, and hence higher operating expense, its lower cost of goods sold to sales gives HP the edge in producing a higher operating income than Dell.

Overall net income to sales decreased for Dell throughout 2006 to 2010, with a major decrease in 2010 and a five-year average of 4.51%. In 2006 net income to sales was 6.46%; in 2009 it dropped to 4.6%; but the major drop came in 2010, with net income at just 2.71%. The main contributor to the drop in net income to sales was operating expenses, with one component being the increase in research, development and engineering, but the primary increase coming from the selling, general and administrative category. Increased operating expenses reflect Dell's push to branch out broadly into the retail market. HP's net income to sales remained flat during the same time span, with a five-year average of 6.88%. This essentially flat net income can be attributed to the economic downturn and its rippling effect on customers.
Common-Size Balance Sheet Analysis

The common-size balance sheet of Dell reflects a current assets to total assets five-year average of 74.91% and a short-term liabilities to total liabilities and shareholders' equity five-year average of 63.72%, covering the years 2006 to 2010. Dell's current assets and current liabilities both decreased from 2006 to 2010, but its current liabilities decreased at a faster rate than its current assets. The gap between the two in 2006 was roughly 7% and had increased to 16% by 2010, providing plenty of opportunity to grow and develop the company further.

HP's common-size balance sheet tells a different story. Its current assets to total assets five-year average was 49.45%, and its short-term liabilities to total liabilities and shareholders' equity five-year average was 42.37% across the years 2006 to 2010. Both accounts decreased slightly over the years, and by 2010 HP's current assets exceeded its current liabilities by only 4%. Potential investors will focus on this close margin, because HP may start to become too heavily leveraged, which could hinder its ability to expand. It could also pose the problem of decreasing the percentage that HP reinvests back into the company, if assets are used to pay off short-term liabilities.

Within Dell's current assets, short-term investments to total assets decreased from 8.67% in 2006 to 1.11% in 2010. Many of these short-term investments had matured and were sold. The additional cash on hand helped decrease accounts payable, which fell from 42.4% in 2006 to 33.80% in 2010. Reducing its liabilities strengthens Dell's financial health, yet further liquidity and asset utilization ratio tests should be conducted to determine whether this more solid financial standing is long term or simply a one-year change.

Dell's inventory to total assets remained much the same over the five-year span, at 2.53% in 2006 and 3.12% in 2010. This reflects Dell's strategy of keeping on-hand inventory levels low and producing only the amount it can quickly sell. HP's inventory to total assets changed substantially, from 9.5% in 2006 to 5.19% in 2010. The drop in inventory percentage to total assets reflects HP's improved strategy of minimizing holding periods by taking delivery of inventory and manufacturing immediately prior to sale or distribution of product to customers. It is also reflective of the aggressive discounting that HP conducted as a result of the economic downturn.

Dell's long-term debt to total liabilities and shareholders' equity increased substantially, from 2.69% in 2006 to 10.15% in 2010, with average long-term debt of 4.71%. The major increase indicates that the company was dependent on long-term debt to finance its acquisition of Perot Systems in 2010. HP's long-term debt to total liabilities and shareholders' equity followed the same path, increasing from 3.04% in 2006 to 12.26% in 2010. This increase in total debt is explained in its annual report as spending on acquisitions and share repurchases. Debt-to-equity ratios need to be evaluated further to determine the risk factor of this increased level of liabilities.

Comparative Analysis

Comparative Income Statement Analysis
Dell's net revenue growth sharply declined from 2008 to 2010, going from 6.47% to (13.42%), as a result of the economic downturn, as individual customers put off luxury purchases such as computers and commercial customers deferred bulk computer orders to a later, to-be-determined date. On average, net revenue growth was 1.86% while cost of goods sold grew 2.05%. Cost of goods sold increased faster than sales, lowering Dell's potential gross profit. Even though selling, general and administrative expense was reduced substantially from its 2008 level of 26.3% down to (8.97%) in 2010, its growth rate averaged 9.45%, which outpaced net revenue on average. The drop in selling, general and administrative was due to decreases in compensation and advertising expenses and improved controls during the downturn. The growth rate of cost of goods, coupled with the economic downturn, left Dell with a (31.91%) change in operating income for 2010. A large decrease in the market yield of over 200 basis points from 2009 was the cause of the (210.45%) change in investments and other income in 2010. Net income growth averaged (10.8%) over the years 2006 to 2010, the major causes being lower sales due to the economic downturn, decreases in investments, increases in tax liabilities and the higher cost of a hedging program.

Much like Dell, HP felt the effects of the economic fallout. Its net revenue growth decreased severely, from 13.50% in 2008 to (3.22%) in 2009. The dollar's depreciation against the euro played a large part in this drop for its European sales. However, unlike Dell, HP rebounded in 2010, increasing sales growth to 10.02%, which can be attributed mostly to HP's acquisition of EDS. HP's annual cost of goods growth averaged 7.4%, lower than its net revenue growth average of 7.96%. This led to a more favorable net income on average, indicating HP's ability to better control its operating income through successful marketing or more effective investment approaches over the years.

Comparative Balance Sheet Analysis

Dell's five-year average total current assets growth rate was 7.75%, higher by a slim margin than its average total current liabilities growth of 7.27%. This relationship is consistent with the common-size analysis, supporting Dell's capability to cover short-term liabilities with current assets. However, caution should be raised and solvency ratios further investigated, as Dell's current assets growth dipped below its current liabilities growth in 2010 by a comparison of 20.32% to 27.60%. For its competitor HP, the current liabilities growth rate is outpacing current assets growth by almost double, at 10.88% versus 4.68%, respectively. This should prompt HP to get control of its short-term liabilities growth rate, but it is not too alarming, considering that by the common-size comparison the company presently has enough current assets to pay for its short-term liabilities.

Dell's accounts receivable growth rate averaged 11.90%, growing faster than the company's average sales growth of 1.86%. This is consistent with the lengthening of the collection period in days over this five-year span. The category of property, plant and equipment grew for Dell at an annual rate of 6.12%, with the majority of this growth happening in the years 2006-2008. Property, plant and equipment declined in 2009 and 2010, by (14.66%) and (4.2%) respectively, which coincides with the company's declining sales growth over these same years.
On average, Dell's total liabilities grew 11.36% annually, compared to its total liabilities and shareholders' equity growth rate average of 8.21%. This highlights the potential for the company to become a long-term solvency risk.

Financial Ratio Analysis

Liquidity

Current Ratio and Acid Test Ratio: The average current ratio for Dell was 1.19 and the acid test ratio was 1.14. These averages are better than HP's current ratio of 1.17 and acid test ratio of 1.0, which indicates that Dell has more current assets to cover its short-term liabilities and makes Dell the safer and more financially sound company. HP had a risky year in 2008, when its current ratio fell below 1.00, ending at 0.98, but this should not be over-weighted, considering that HP's net revenue grew at an average rate of 7.96% and its net income at an average of 39.33%.

Collection Period: Dell's ability to collect customers' payments on accounts receivable is stronger than HP's, with Dell taking 32.04 days on average compared to HP's 49.74 days. While both companies' collection periods were longer than the normal business benchmark of 30 days, Dell was much more successful at collecting from its customers and thus reduced its exposure to risky accounts receivable. The shorter collection period also enables Dell to pay for its inventory without taking on greater amounts of short-term debt through increased working capital financing.

Days to Sell Inventory: Dell's inventory holding period was much shorter than HP's, with Dell having an average days-to-sell-inventory ratio of 6.70 compared to HP's average of 32.2. Dell operates in a slightly leaner production manner than HP and is able to move inventory quickly through its distribution networks. The quicker a company is able to sell its inventories, the sooner it receives payment to cover the money owed on inventories acquired and sold, without having to increase its working capital financing.

Capital Structure and Solvency

Debt to Equity Ratios: Dell's five-year average total debt to equity was 5.23, compared to HP's lower average ratio of 1.5. This shows that Dell relied more on debt (creditor) financing than equity (shareholder) financing. Long-term debt to equity on average was 0.29 for Dell and 0.22 for HP. While many feel that debt from creditors is more harmful because of the interest paid on the principal borrowed, the advantage is that once the creditor is paid back, they are gone and off the payroll, whereas equity financing means more shareholders owning parts of the company, which dilutes the dividend payout per shareholder as well as earnings per share. Dell's approach of being more heavily financed through debt than equity may be an attempt to keep earnings per share at an elevated level.

Return on Investment

Return on Assets and Return on Common Equity: An important ratio is return on assets, which measures earnings per dollar of assets. The five-year average return on assets was 13.06% for Dell and 9.07% for HP. Dell's higher percentage reflects a more efficient use of its assets and higher earnings per company asset from products sold. Both companies have strong returns on assets, which speaks to the loyal customer base each brand commands. Return on common equity is another important profitability ratio; it measures the earnings success of a company's capital investments through common shareholders.
The return on equity for Dell averaged 81. 46% while HP averaged 23. 91. An observation of this profitability measure shows that Dell is possibly much more attractive for potential investors for its ability to effectively manage and use funds generated through shareholders equity.Operating Performance Profit Margin Ratios Dell’s gross profit margin average of 17. 77% was lower than HP’s average of 24. 04% HP controls a larger portion of the computer market as represented through this ratio. Dell also FINANCIAL ANALYSIS OF DELL AND HP 11 posted lower operating profit margins and pretax profit margin compared to HP. Dell’s higher selling, general and administrat ive expenses are cause for lower operating and pretax profit margins, partly due to new retail and certain global distribution relationships.As expected from the precursors above, net income was also lower for Dell when compared to HP. Dell needs to encroach more forcefully into HP’s large market share to positively influence its sales. Operating expense components should be addressed as well to find cost savings measures to increase operation income in order to ultimately increase its net income. Asset Utilization Cash Turnover The measure of how efficient a company utilizes its cash and cash equivalents to create sales revenue is depicted with the cash turnover ratio. In respect to this ratio, Dell averaged 5. 0, while HP averaged 7. 09. This showed that HP used its cash and cash equivalents more efficiently to build revenue. On the other hand, it shows that HP used its cash and cash equivalents while Dell refrained from using its cash and cash equivalents, as evident in th e common size analysis, showing that Dell retained on average 31. 77% of cash and cash equivalents to assets while HP averaged 12. 41%. Inventory Turnover Inventory turnover represents how fast companies turn their inventories into sales revenue. Dell had a much slower inventory turnover on average, 58. 8, than HP’s 11. 86. Over the past five years more companies have became better at the Dell model of sales direct to customers which has overall effected Dell’s sales as evident in the comparative analysis showing on average Dell grew sales by 1. 86% while HP grew at 7. 96%. Also, HP has become more efficient in their inventory distribution cycle and the amount of inventories held in relation to total assets, dropping from 9. 45% in 2006 to 5. 19 by 2010. Dell’s turnover ratio was directly affected by its increase in inventory to total assets growing from 2. 53% in 2006 to 3. 2 % by 2010. The increase in Dell’s inventories to total assets percentage couple d with declining sales growth over the past five years was a cause for their much higher inventory turnover rate. Total Assets Turnover Total assets turnover measures how efficiently a company utilizes total assets to create sales revenue. On average, Dell’s ability to generate more profit from its assets was roughly FINANCIAL ANALYSIS OF DELL AND HP 12 double that of HP, being 2. 15 to 1. 07 respectively. This shows that for overall assets held, Dell had a better record of generating sales.Market Measures Price to Earnings Ratio and Earnings Yield The price to earnings for Dell on average was 16. 35, lower than HP’s 18. 52. From this statistical ratio, HP is able to show that its investors have higher expectations of their company performance by being committed to paying a higher price per share to own HP stock over the past five year time span. 
However, with Dell showing better results in liquidity and return on investment, it can portray itself to potential investors as the better buy at a lower price per share when compared to HP. Earnings yield represents the amount of earnings generated for every dollar invested. Here Dell has the better showing on average, with 7.02% compared to HP's 6.25%. This ratio can be another point of persuasion that Dell is the better buy, being properly priced in terms of earnings yield over the years 2006 to 2010.

Summary of Financial Performance and Suggestions for Improvement

Both Dell and HP have the financial statistics showing why they are strong competitors in an ever-evolving industry. In an industry that attracts potential customers by offering the latest, fastest and greatest products, Dell needs to increase its research, development and engineering spending as a percentage of sales. Dell can no longer rely on simply offering cheaper products, because offering the newest technology and quality of product has moved to the forefront of consumers' minds. It would be wise for Dell to focus on the precise areas where it has a strong competency and not try to be all things to everyone. One area it may want to rethink is its expanded push into retail stores. Considering that Dell is fairly new to the retailing segment, its ties to the retail market are not as strong as those of many competitors who have long-standing relationships with retailers. These long-standing relationships give companies like HP an advantage over newcomers to retail stores such as Dell, and possibly over the next year or so Dell should rethink this new part of its strategy. At the moment, the increased funds spent on selling, general and administrative have not translated equally into higher sales revenue.
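The ratios used throughout this analysis follow standard textbook definitions. As a hedged illustration, the Python sketch below computes a few of them; the figures passed in at the bottom are invented placeholders, not Dell's or HP's actual statement values.

```python
def current_ratio(current_assets, current_liabilities):
    # Current assets available per dollar of short-term liabilities.
    return current_assets / current_liabilities

def collection_period_days(accounts_receivable, annual_sales):
    # Average number of days taken to collect payment from customers.
    return accounts_receivable / (annual_sales / 365.0)

def inventory_turnover(cost_of_goods_sold, average_inventory):
    # How many times per year inventory is sold through;
    # days to sell inventory is 365 / turnover.
    return cost_of_goods_sold / average_inventory

def return_on_assets(net_income, total_assets):
    # Earnings generated per dollar of assets held.
    return net_income / total_assets

# Placeholder figures in millions (illustrative only):
print(round(current_ratio(29_021, 24_380), 2))          # -> 1.19
print(round(collection_period_days(5_837, 61_101), 1))  # -> 34.9 days
print(round(365 / inventory_turnover(50_250, 860), 1))  # -> 6.2 days to sell
```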

Saturday, November 9, 2019

MBA Interactive Project Essay

Introduction

Continuing the work and analysis begun in the first three SLPs, we again project ourselves back in time to the year 2012. I am responsible for decisions on product development and pricing for the next four years for our line of tablets. I will show the score, financials and market data at the end of the four-year period from my previous runs. Finally, we can make a detailed discussion and analysis of the data using CVP analysis, and I will explain why I recommend specific pricing and research and development (R&D) costs for the next four-year period.

Discussion

The Clipboard Tablet Company currently makes three different tablet models: the X5, X6 and X7. The X5 has been on the market for three years already, and market research has determined that consumers are not very worried about performance for this older tablet. The middle tablet, the X6, has been on the market for two years, and market research shows consumers are concerned about performance but not necessarily price. The final tablet, the X7, is the newest and has only been on the market for one year; market research shows its consumers are interested in both performance and price. With this in mind, we can analyze how the products evolved when Mr. Shmoe was in charge. The following table depicts the price and R&D percentage for each tablet over the preceding four years (since the last run) and whether or not production of the particular tablet was discontinued. The graphs also depict the revenue generated and profit from the different tablets over the time period in which I was in charge of making the decisions instead of Mr. Shmoe.

Review

Overall, the third run of the tablet simulation ended approximately $142 million ahead of the previous run, which was accomplished using CVP analysis. This change was due to two factors. The first was the increased sales and revenue generated by the X6, which ultimately reached market saturation. The second was the dramatically increased sales of the X7. As the graphs display, the X6 accelerated greatly in terms of revenue and profit through 2013 and then began a steady and definite decline once it reached market saturation. Revenue and profit for the X7 were drastically different compared to previous simulations, beginning to increase in 2014 and 2015 and setting the stage for sustained revenue and profits in the future. The X5 was relatively unchanged from previous simulations, since I left its pricing alone given that the tablet had already been on the market for several years.

Data Discussion

It will also help to discuss in more detail what happened in the third simulation under my supervision while using the CVP model. For the X5, the initial R&D allocation of the $24 million available was only 5%, or $1.2 million, which together with the $75 million in other fixed costs gave a total fixed cost of $76.2 million. The variable cost per unit for the X5 is $150, so at a price of $300 per tablet, the breakeven point for the X5 is 508,000 units sold. A price of $300 per tablet yielded a profit of $119 million. Fixed costs for the X5 are extremely high, and given the age of the X5, few R&D dollars were allocated in order to keep total fixed costs down. Next, the X6's fixed costs were $37.5 million including R&D, while its variable cost came out to $275 per tablet. The breakeven volume for the X6 priced at $375 per tablet comes out to 375,000 tablets.
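The breakeven volumes quoted here follow the standard CVP relation: breakeven units = fixed costs / (price − variable cost per unit). A minimal Python sketch reproducing the X5 and X6 figures above:

```python
def breakeven_units(fixed_costs, price, variable_cost_per_unit):
    # Standard CVP breakeven: the number of units whose contribution
    # margin (price minus variable cost) exactly covers fixed costs.
    return fixed_costs / (price - variable_cost_per_unit)

# X5: $76.2M fixed, $300 price, $150 variable cost
print(breakeven_units(76_200_000, 300, 150))  # -> 508000.0 units

# X6: $37.5M fixed, $375 price, $275 variable cost
print(breakeven_units(37_500_000, 375, 275))  # -> 375000.0 units
```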
The idea here was to achieve market saturation as quickly as possible and reap the associated profit. Based on the life cycle of the X6, its price was increased by only five dollars per year, and the associated R&D expense was reduced by only 5% in the later years. Finally, the X7 has the same fixed cost structure as the X6, with the only difference being the dollars allocated to R&D, which for this run of the simulation was $49.5 million. However, the variable costs for the X7 are extremely low, at only $55 per tablet. The breakeven volume for the X7 at $120 per tablet comes out to just under 577,000 tablets. The strategy here was to set a much lower initial price in an attempt to capture market share and volume up front, which would ultimately reap large profits after the breakeven point.

Formulating a Revised Strategy

Considering all of this information, the revised strategy will be somewhat similar to the previous one in its overall conceptual plan. My previous strategy focused on CVP analysis, while the revised strategy for the tablet simulation will attempt to tweak and optimize it further. Because each tablet's breakeven price is low, volume for each one can be increased immediately by a reduction in price, and we will attempt this for the X6 and X7. CVP does not account for product life cycle, however, which is why my strategy is to more or less leave the pricing for the X5 the same as in the previous run. R&D for the X5 will never rise above 1%, since consumers do not care about this feature for the X5, while R&D will be highest for the X7, whose performance is marketed as the primary benefit of this higher-performance tablet. Prices for the X6 and X7 will increase by $10 per year while maintaining roughly a 40-60 R&D split respectively, with the X7's price starting $10 lower. This strategy should show higher profits after four years by keeping prices closer to their breakeven CVP pricing and varying the R&D costs slightly based on changing market saturation. To sum up, the strategy is to more or less hold the X5 and X6 fairly constant from my previous simulation but to increase overall sales of the X7, thus creating more revenue and profit. The following table depicts the results of the updated strategy for the next four years. As you can see, we left the initial pricing for the X6 the same, starting at a price of $375.

Conclusion

In conclusion, we generated a revised strategy for the Clipboard Tablet Company based on a revised and optimized CVP analysis. By adjusting the pricing for the X7 slightly downward in order to increase sales, revenue and profit, we maximize the outcome. The goal is to continue to reap the profits from the X5, extract maximum revenue from the X6 by achieving market saturation, and dramatically increase sales of the X7, which is the future of our company. I look forward to putting this into practice.

Wednesday, November 6, 2019

The History of Relational Database Technology

Introduction

In recent times, the use of object-oriented design in software development has skyrocketed. This has led software engineers to think of ways of building database systems that are object oriented, since such systems are better able to meet market needs. At the moment, there is no standardized language for programming object-oriented database systems. The field is still evolving, and stakeholders hope to formalize standards for object-oriented database systems. To maximize the utility of relational database systems, concerted efforts must be made to contain the shortcomings of the current technology. A historical analysis of the evolution of relational database technology will help us understand how object-oriented database systems can be implemented with the aim of eliminating those shortcomings.

A relational database system is defined as a database that allows any data visible to the user to be organized in the form of tables that permit all operations on them (Chamberlin, 1990). A database refers collectively to data or information organized and stored in a manner that allows quick access, to enhance usability. In the 1960s, systems called database management systems were developed, providing the necessary functionality for the creation, maintenance and modification of databases. These systems were, however, not efficient, due to the complexity associated with them.

In database client/server architecture, a client application requests data-related services, e.g. filtering or sorting, from a server (Batory, 1998). The latter is also known as the SQL engine or database server. The client's request is granted by the SQL engine, which returns secured access to the data to be shared. SQL statements allow client applications to perform operations on a set of server database records, such as retrieval and modification of data. The engine also allows other operations to be performed on the data, such as filtering of query result sets, thereby improving access to saved data.

There are various types of database management systems, such as hierarchical databases, network databases and relational database models. The last of these had fewer disadvantages than the previous ones, which led to increased interest in how it worked. Relational database systems are unique in that data is organized in separate structures, commonly known as tables, which can be linked to enhance the storage of data. This model was first proposed by Dr. Codd, whose aim was to eliminate the shortcomings of previous database management systems, mainly their complexity and the huge amounts of information involved. Dr. Codd introduced the relational database model in 1970 at IBM's San José Research Laboratory. SQL, or Structured Query Language, is the most renowned standardized language for interacting with a relational database.

History of SQL

SEQUEL (Structured English QUEry Language) was developed as a data sublanguage for the relational model by Dr. Codd's colleagues at the IBM San José Research Laboratory in the early 1970s.
History of SQL

Building on Dr. Codd's relational model, his colleagues at IBM's San José Research Laboratory developed SEQUEL (Structured English Query Language), later known as SQL, as a data sublanguage for the relational model in the early 1970s. The language was originally set down in a series of papers beginning in 1974. IBM used it in a prototype relational system known as System R, which was developed in the 1970s (Codd, 1970). Other prototypes developed at the time included INGRES, from the University of California, Berkeley, and a relational test vehicle from the IBM UK Scientific Laboratory. The first relational database management system released to the market came about when System R was refined and evaluated, around 1981, into a user-friendly product. Other DBMSs (database management systems) built on SQL included Oracle and IBM DB2, released in 1979 and 1983 respectively. Relational DBMSs that later incorporated SQL included, but were not limited to, MySQL, Paradox, FoxPro, and hundreds of others (Codd, 1970). Dr. Codd's twelve initial rules for the relational database model have grown over time to a total of 333.

SQL was endorsed as the standard language for relational databases by both the International Organization for Standardization (ISO) and ANSI, the American National Standards Institute. Its use was formalized in 1986 under the name SQL-1. Three years later, a minor revision known as SQL-89 was published. In 1992, major revisions were made and endorsed by both ISO and ANSI; this version, SQL-92, substantially expanded the language. In 1999 another standard, SQL:1999, was published and endorsed by ANSI and ISO. This version, which is currently in use, added features such as user-defined data types and, most importantly, object-oriented data management capabilities. It is common for vendors of relational database management systems to implement their own extensions of SQL to enhance functionality.

Historical Background of Object-Oriented Systems

The need for an advanced relational database technology that was easier to use led researchers to consider incorporating object-oriented capabilities into DBMSs. In the 1980s, the disadvantages of relational database systems and the need to increase the range of manageable objects led to the invention of commercial object-oriented database systems. Database systems have evolved over time to allow the step-by-step incorporation of object-oriented capabilities.

The first object-oriented language was Simula-67, in 1967, followed by Smalltalk. After this, researchers saw fit to create new languages by extending existing ones rather than starting from scratch. Languages formed by extending LISP included LOOPS and Flavors (Codd, 1970), and extensions of C produced languages such as C++ and Objective-C. Similarly, semantic data models for database systems, such as the entity-relationship (ER) model, DAPLEX, and SDM, were developed (Batory, 1998).

There are five generations in the evolution of database technology: first file systems, then hierarchical database systems, then CODASYL database systems, and fourth, relational database systems. The fifth generation is still under development. The second and third generations allowed remote users to access a central, integrated database; however, these systems were difficult to navigate and offered no data independence.
This led to the rise of the next generation of database systems, the fourth generation. These four generations were designed for use in business applications such as accounting, sales inventory, purchasing, payroll, and hundreds of other data processing applications. Fifth-generation database technology is expected to meet needs that go beyond business applications. Each successive generation of database systems has taken over functionality that previously forced users into fatiguing, repetitive work, enabling programmers to carry out their duties with ease. This move was not without shortcomings, since the performance of these systems was compromised, and it made researchers work hard to ensure that the performance of the next generation of database technology remained on par. The use of declarative queries in relational databases made it easier for programmers to retrieve information from the database. Performance was enhanced by introducing a new component, the query optimizer, which determined the fastest method of retrieving records from the database.

Concerted research efforts were focused on developing reliable relational database technology in the 1970s, and commercial relational database systems were introduced to the market. There were, however, major shortcomings in the use of this technology in other applications. Researchers undertook to investigate these shortcomings in the 1980s. The affected applications included knowledge-based systems (e.g. expert system shells), CAD, CASE, and multimedia systems (Batory, 1998). The main difficulty arises from the gulf between programming languages and database languages: their data structures and data models differ to a wide degree.

Evolution and History of System R

System R was the prototype database system from which relational database technology was derived. This prototype proved that the relational data model had various practical advantages that could be realized in everyday use. The most important capability of a computer is the ability to store and retrieve data, and modern DBMSs give the user much-needed independence through an advanced user interface, allowing the user to deal with the information content itself rather than with representations of that information (lists, pointers, bits, and so on). As stated earlier, the pioneer of the relational data model was Dr. Codd, in the early 1970s. According to Codd (1970), conventional database systems store information in two ways: through the contents of the records in the database, and through the way in which those records are connected to each other (different systems use constructs such as parent records and links to make these connections). Codd observed two important properties of relational database technology: first, all information is represented by data values, and second, the system can support a very high-level language through which users request data without specifying retrieval algorithms. System R was intended to accomplish seven goals.
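The "very high-level language" property just described, together with the query optimizer mentioned earlier, can be illustrated with a hedged sketch, again using Python's built-in sqlite3 module. The orders table and index name are invented, and EXPLAIN QUERY PLAN is an SQLite-specific diagnostic rather than anything System R itself offered; the point is only that the same declarative query runs unchanged while the optimizer silently switches its access path once an index exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
cur.executemany(
    "INSERT INTO orders (id, customer, total) VALUES (?, ?, ?)",
    [(i, f"cust{i % 100}", float(i)) for i in range(1, 10001)],
)

# The query is declarative: it states WHAT rows are wanted, never HOW
# to navigate pages or records to find them.
query = "SELECT id, total FROM orders WHERE customer = ?"

# Without an index, the optimizer's only access path is a full table scan.
for row in cur.execute("EXPLAIN QUERY PLAN " + query, ("cust42",)):
    print(row)

# After an index is created, the optimizer switches access path on its
# own; the query text, and any application using it, is untouched.
cur.execute("CREATE INDEX idx_customer ON orders (customer)")
for row in cur.execute("EXPLAIN QUERY PLAN " + query, ("cust42",)):
    print(row)

# Dropping the index is equally transparent to applications.
cur.execute("DROP INDEX idx_customer")
```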
Development proceeded in three phases. Phase zero ran from 1974 to 1975 and involved the development of a user interface. Phase one ran from 1976 to 1977; in this phase a fully functioning multiuser version of the prototype was designed. The final phase, phase two, ran from 1978 to 1979 and involved the evaluation of System R. After this, further experiments were carried out on the system, but it was not released to the market until much later. Of particular concern to our historical review are the optimizer, built in phase zero, and phase two, which introduced the concept of normalization. As previously discussed, the optimizer speeds navigation in a database system by minimizing the number of page fetches through the clustering property: a clustering index enables all records with the same key to be placed on the same page.

Phase two took two years to complete and consisted of two main parts: the San José experiments conducted on System R, and actual deployment of the system at various IBM sites and selected client outlets. System R was not to be used for any commercial purpose at this stage; the aim was to test the usability of the system on an experimental basis only. The first usability experiment was carried out in June 1977, and all the users involved gave positive feedback. Among the qualities investigated were the ability to reconfigure the database quickly, the high level of the user language, and ease of installation. Several users reported that they could load a database with ease, in addition to installing and designing it. Further reports suggested that users found it quite possible to tune the performance of the database after loading data, by creating and dropping indices, without interfering with the application programs or the end user; tables could be updated and adjusted even in read-only mode. Users rated the System R experiment as satisfactory in terms of fair resource consumption and performance that was reliable for a project at the experimental level. Multiple users accessed the relatively small System R experimental database, though the number was often restricted to ten. Naturally, interactive response time grew longer whenever a complicated SQL statement was being executed (Codd, 1970). To solve this performance problem, a concept called normalization was taken into account: since performance slowed every time a complicated SQL statement involving several tables was executed, an alternative was to break large database tables into smaller parts to eliminate redundancy and later join them back together through user applications or the view mechanism. This process is known as normalization.

Normalization

Normalization is the process of eliminating redundant information from tables. It improves the efficiency of a database and makes the data resistant to corruption. For instance, suppose a user had two tables, labeled Black and White, and used both of them to store people's contact details, such as cell phone numbers, postal addresses, and emails. If the user, or someone else, makes changes to either of the tables, there is a probability that changes made in table Black will not be reflected in table White, and vice versa: if the user changed someone's cell phone number in table White, that change might not appear in table Black. For the change to be shown, tremendous amounts of work would be required on the part of the user, which defeats the purpose, given that database systems are meant to improve efficiency and save the business as much time and money as possible. This problem can be solved by keeping only the ID of the person in table Black. This in turn gives the user the freedom to change a cell phone number, or any other contact information, in table White alone; such changes are reflected in table Black automatically.
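The following minimal sketch translates the Black/White example above into runnable form, again using Python's sqlite3 module; the column layout is invented for illustration. Table White holds each person's contact details exactly once, while table Black stores only the person's ID, so a phone-number change made in White is seen by Black automatically through a join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Table White is the single authoritative store of contact details.
cur.execute("""CREATE TABLE white (
    person_id INTEGER PRIMARY KEY,
    name      TEXT,
    cell      TEXT,
    email     TEXT)""")

# Table Black keeps only the person's ID (a foreign key), never a
# duplicate copy of the contact details themselves.
cur.execute("""CREATE TABLE black (
    entry_id  INTEGER PRIMARY KEY,
    person_id INTEGER REFERENCES white(person_id),
    note      TEXT)""")

cur.execute("INSERT INTO white VALUES (1, 'Jane', '555-0100', 'jane@example.com')")
cur.execute("INSERT INTO black (person_id, note) VALUES (1, 'preferred client')")

# Update the cell number once, in White only...
cur.execute("UPDATE white SET cell = '555-0199' WHERE person_id = 1")

# ...and Black reflects the change automatically via the join.
cur.execute("""SELECT b.note, w.cell
               FROM black AS b JOIN white AS w ON b.person_id = w.person_id""")
print(cur.fetchall())  # [('preferred client', '555-0199')]
```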
References

Batory, D., et al. (1998). GENESIS: An extensible database management system. IEEE Transactions on Software Engineering, 11(13), 12-14.

Chamberlin, D. (1990). Relational database management system. Computing Survey, 19(20), 5-9.

Codd, E. F. (1970). A relational model of data for large shared data banks. Communications of the ACM, 13(6), 377-387.

Monday, November 4, 2019

There are no absolute distinctions between what is true and what is false. Discuss Essay

There are no absolute distinctions between what is true and what is false. Discuss this claim - Essay Example …the bedrock of such misbehavior and ignorance resulting from such scenarios that people deny the absolute distinctions existing between truth and falsities. That is the reason discussion of the differences between what is true and what is false has almost always been a hot topic among sociologists and psychologists, who strive to establish, through careful research, whether there is any truth in the claim that "there are no absolute distinctions between what is true and what is false." In my opinion, this claim is unequivocally false and unjustified, and I believe this because, were it not for the clear distinction between truths and falsities, this world would long ago have descended into a horrible mess and pandemonium. It is only because some people clearly see and realize the differences between correct and incorrect actions that there is some peace left in this world. Agreeing on the definite boundary line between right and wrong is a factual, reality-based concept. "Absolutism" and "relativism" are two famous ethical approaches well worth mentioning when discussing the claim that truths and falsities are, or are not, two entirely separate domains. Absolutism stresses the existence and applicability of moral or ethical standards; it holds that right is right and wrong is wrong, and that there is a clear difference between good and bad actions. Believers in absolutism have a clear vision for identifying morally justified deeds and know how to distinguish them from unethical and socially offensive actions. According to absolutism, respecting and abiding by established moral laws is very important for preserving traditional or conventional values, which reflect the obvious difference between just and unjust things. If the theory of absolutism is scrutinized, this much becomes clear: its whole philosophy strives to fulfill the obligation of

Saturday, November 2, 2019

Forensics Research Project 3 Paper Example | Topics and Well Written Essays - 1000 words

Forensics Project 3 - Research Paper Example The primary purpose of the computer forensic processing method is to address certain major factors, namely image-copying technology, defining the principles underlying the computer forensic method, formalizing the most appropriate methodology for assembling and examining the activities of users, and presenting the alleged data or information to a court of law. In this regard, the framework of this computer forensic method will comply with four major processes: acquisition, identification, evaluation, and presentation (Craiger, n.d.). In relation to the provided case, it can be apparently observed that organizational resources have been misused by staff during the work schedule, in terms of using computers, the network, and cell phones, particularly for electronic gambling. This practice can unfavorably impact the overall performance of the organization in various ways. For instance, the misuse of such organizational resources can severely impair the organization's performance and reduce its ability to achieve its ultimate goals and objectives. Moreover, the practice can also have a negative effect on the time, cost, and performance of each individual attached to the organization.

Steps Required to Investigate, Document and Preserve Evidence

The computer forensic process with respect to the provided case scenario will be focused on the conservation, recognition, and extraction of computer media, with the intention of protecting the organization's resources and intellectual property. In this regard, the major functions and implications of the different steps involved in the computer forensic processing method are demonstrated in the following discussion.

Step 1: Acquisition

The acquisition step of the proposed computer forensic processing method refers to assessing, or keeping track of, each computer system within an organization. The process significantly helps to preserve detailed evidence of users' conduct across different activities in the organization (Organization of American States, 2011). In relation to the provided case scenario, the acquisition step will focus on keeping track of uninvited or unauthorized individuals by deploying a technological system within the organization. The process will enable administrators to track and assess the activities performed by users through their allocated systems. Moreover, this step will also involve preserving data or information concerning each user's movements through their systems, enabling administrators to prepare effective documentation regarding misuse or fraud-related activities performed by employees. Additionally, the step will facilitate the efficient maintenance of an effective information flow and help the organization to protect its resources and intellectual property to a certain degree.

Step 2: Identification

The identification step is generally defined as the process of analyzing the various technological aspects, including the physical and logical units of the systems, that are in operation in an organization. The step also involves the presentation of different opinions in accordance with the relevance of the