Saturday, August 31, 2019

Frankenstein Bibliography

Bewell, Alan. "An Issue of Monstrous Desire: Frankenstein and Obstetrics." The Yale Journal of Criticism 2.1 (1988): 105-128. Rpt. in Nineteenth-Century Literature Criticism. Ed. Denise Kasinec and Mary L. Onorato. Vol. 59. Detroit: Gale Research, 1997. 105-128. Literature Resource Center. Web. 12 Nov. 2009. This essay discusses how Mary Shelley portrays the development of a human being (the creature). It remarks on female imagination and how it was believed to work mimetically on fetuses, and it devotes considerable attention to pregnancy.

Seabury, Marcia Bundy. "The Monsters We Create: Woman on the Edge of Time and Frankenstein." Critique: Studies in Contemporary Fiction 42.2 (2001): 131. Literature Resource Center. Web. 12 Nov. 2009. This article points out the creature's differences from the rest of society. It also describes similarities among Walton, Frankenstein, and the creature, such as isolation and intense introspection.

"Through the Looking Glass: Victor Frankenstein and Robert Owen." Extrapolation 43.3 (2002): 263. Literature Resource Center. Web. 15 Nov. 2009. This article discusses the importance of education. It explains how Victor was obsessed with education, and how the creature survives and manages to educate himself with less training than the average person.

Yousef, Nancy. "The Monster in a Dark Room: Frankenstein, Feminism, and Philosophy." Modern Language Quarterly 63.2 (2002): 197. Literature Resource Center. Web. 4 Nov. 2009. This article discusses Frankenstein's endeavor and his dream to create life without a woman. It also explains the creature's peculiar isolation and his education, and gives some examples of the creature's first sensations and reactions.

Friday, August 30, 2019

Understand the expected pattern

Explain the sequence and the rate of each aspect of development from birth to 19 years

The sequence of child development means the expected development of a child from birth to 19 years. Child development refers to the biological, psychological and emotional changes that occur within this time as the individual progresses from dependency to increasing autonomy. Because these developmental changes may be strongly influenced by genetic factors and events during prenatal life, genetics and prenatal development are usually included as part of the study of child development. Related terms include developmental psychology, referring to development throughout the lifespan, and paediatrics, the branch of medicine relating to the care of children. Developmental change may occur as a result of genetically-controlled processes known as maturation, or as a result of environmental factors and learning, but most commonly involves an interaction between the two. It may also occur as a result of human nature and our ability to learn from our environment. Human beings have a keen ability to adapt to their surroundings, and this is what child development encompasses. Each child usually develops at much the same rate as another child.

Infant (birth to one year)
- Intellectual: learns about things with hands and mouth.
- Social/Emotional: attaches to mother and father; begins to recognise faces and smile; at about 6 months begins to recognise parents and expresses fear of strangers; plays simple interactive games like peek-a-boo.
- Language: vocalises, squeals and imitates sounds; says 'dada' and 'mama'.
- Gross motor: lifts head first, then chest; rolls over; pulls to sit; crawls and stands alone.
- Fine motor: reaches for objects and picks up small items; grasps rattle.

Toddler (1-2 years)
- Intellectual: learns words for objects and people.
- Social/Emotional: learns that self and parent(s) are different or separate from each other; imitates and performs tasks; indicates needs or wants without crying.
- Language: says some words other than 'dada' and 'mama'; follows simple instructions.
- Gross motor: walks well; kicks; stops and jumps in place; throws balls.
- Fine motor: unbuttons clothes; builds a tower of 4 cubes; scribbles; uses a spoon; picks up very small objects.

Preschool (2-5 years)

Thursday, August 29, 2019

Chose an interesting Topic Essay Example | Topics and Well Written Essays - 750 words

Chose an interesting Topic - Essay Example The purpose of this brief paper is to identify the various stages of the relationship that my best friend and I went through growing up, to identify conflicts that emerged between us, and to formulate a conclusion about the value of such interpersonal relationships. Our friendship was initiated during grade school. Ahmed and I ended up in the same class one particular year, with him being new to the school. Like most kids, I did not pay much attention to the 'new kids'. Over time, however, I realized that we had so much in common. We shared the same interests in sports, classes at school, and outside activities. It seemed that we would have made great brothers. During this stage of our just-beginning friendship, I suppose we began to explore each other. Good friends are hard to come by, and we both wanted to make sure that we truly desired to invest the time in each other to make such a friendship work, blossom, and become a lifelong partnership. As the months of that school year went by, our friendship began to intensify. This began with us playing simple games in the schoolyard. Before long, Ahmed was regularly in with my own group of friends, and he quickly became one of the 'gang'. Over time, Ahmed and I spent more time with each other than with others in the group because of our intensifying belief that we would make great friends. We had so much in common, it seemed that we were meant to help each other along life's journey. Naturally, we were typical school-age children growing up in the Middle East. We got into mischief, respected our elders when they were in our presence, and scorned them when they were not. During this intensifying portion of our friendship, Ahmed and I would frequently go to each other's homes after school, and we both got to know each other's families well. In short, we became best friends within the span of a single school year. During the school year, life is quite hectic, especially for a young boy. There are many responsibilities to take care of, both at school and at home. Since Ahmed and I had only met at the beginning of a school term, we needed that first year to discover one another and begin the process of growing our friendship. It was not until the end of the school year, and the beginning of the summer break, that our relationship turned quite stable. It was during that first summer that we were truly inseparable. Every day we were together, playing and just enjoying life. We shared our experiences, and I can honestly say that this stable stage of our relationship forged our friendship for life, even when conflict eventually arose. We remained like that for several years. A stable foundation had been laid and we were comfortable around one another. We continued to study together, played on various teams together, and our families even became great friends. On two different occasions, Ahmed took a holiday with my family, and I went with him and his family at least once that I can remember. Almost every relationship will have its difficult times, and my friendship with Ahmed was no exception. While we had our minor disagreements during grade school, they were easily resolved as being over something petty. As we entered high school, however, our friendship entered a period of decline. Looking back on it, different interests got in our way and we both lost sight of what friendship was truly about. In high school, we were often in different classes, separate from

Wednesday, August 28, 2019

Examination skills- preparation and technique Assignment

Examination skills- preparation and technique - Assignment Example Firstly, it cannot be over-emphasized that one of the most effective techniques is to prioritize the study material. For instance, far too many people engage in the process of revising and devote equal amounts of time to each facet of the information that they might be tested upon. This is a flawed strategy, due to the fact that certain parts of the information will come clearly and as second nature to the student; by comparison, other aspects of the information may be much harder to understand and require a more thorough approach. Similarly, the setting of revision is oftentimes overlooked. For instance, studies have shown that one hour of quality, uninterrupted study time is more effective than many hours of continually interrupted study time and/or distractions (Hing Sun, 2005). As such, a particularly useful technique that I have employed in the past is to set aside a given portion of time as a means of studying. In much the same way that other aspects of the day are planned out, revising can be accomplished with a similar technique. A further technique that should be employed is to resist the pitfall of seeking to memorize everything. Even if one has an exceptionally good memory, this particular approach is pointless, as it creates little understanding and does not further the educational achievement of the student beyond merely regurgitating information back onto the page. Finally, and perhaps most obviously, the temptation of cramming for exams must be resisted at all costs. Although many students swear by their ability to procrastinate until the very last minute and then stay up for days at a time as a means of rapidly understanding and memorizing key information, studies and research into these techniques have definitively shown that this approach is fundamentally flawed and ultimately leads to a lower overall score compared to those students who were able to set aside a given amount

Tuesday, August 27, 2019

ILLUSTRATION ESSAY Example | Topics and Well Written Essays - 500 words

ILLUSTRATION - Essay Example Changing light bulbs is just one thing a person can do to reduce their carbon footprint, along with recycling, driving less, and buying local. There is no doubt the new "green" light bulbs have a lot of advantages in the battle against global warming. CFLs use about 75 percent less energy and last up to 10 times longer. If all the regular light bulbs in the United States were replaced with CFLs, 158 million tons of carbon dioxide emissions, or the same carbon load as 30 million cars, would be saved (McKeown and Swire, 2009). If that were so, a quick trip in my car to the corner store for a can of soda wouldn't have such a big impact on my carbon footprint. Compact fluorescent lights are more energy efficient because they turn more of the electricity into light rather than radiating the energy away as heat. Because of this quality, some people see the light as harsh. CFLs are coated with phosphor, which keeps certain wavelengths of light from showing up to the human eye (Fischetti, 2008). I don't think the light is so much harsh as brighter. That makes CFL bulbs an advantage, in my eyes. I can always adjust the lampshade so the light doesn't shine directly in my eyes, and many homes and businesses have dimmer switches installed instead of regular on/off switches. Using a dimmer switch further reduces the amount of electricity needed to keep the lights on. The technology that makes CFL bulbs efficient also makes them cost more money than regular light bulbs, but manufacturers are working on lowering costs so more consumers will accept the change from regular bulbs to CFLs. Over time, the initial higher cost balances out in energy savings and in how long the bulbs last before burning out. Governments all over the world have stepped up the push toward using more energy-efficient CFL light bulbs (McKeown and Swire, 2009; Fischetti, 2008). As far back as 1996, more than 80 percent of Japanese households were using CFLs. Australia has already

Monday, August 26, 2019

Social Responsibility of a Business Term Paper Example | Topics and Well Written Essays - 1000 words

Social Responsibility of a Business - Term Paper Example This famous claim by Friedman, however, triggered a debate on what the social responsibility of a business is. The businessperson Mackey disagreed with Friedman's thought, terming it narrow and as underselling the humanitarian aspect of capitalism. Mackey strongly believes that the social responsibility of a business is not only to increase profits but also to create value for all the stakeholders in the business. Mackey argues that the social responsibilities of a business to shareholders, society, and the stakeholders are varied and are all satisfied in different ways, which should be taken seriously by any kind of business that is to be successful. I strongly agree with Mackey that the social responsibility of a business is not only to increase profits, but also to satisfy the needs of the society, shareholders, and stakeholders, which are just as important. Social responsibility of business to stakeholders: the stakeholders in a business comprise the community, employees, suppliers, and clientele. According to Mackey (2005), all these stakeholders draw the meaning of the business from their own kind of satisfaction. It is worth noting that the groups' needs are varied as well, and the needs are satisfied in different ways. ... Satisfied employees in any business will translate into efficiency and quality output, which are valuable assets to the business. The social responsibilities of a business to employees include good working conditions, attractive salaries and wages, social security such as insurance and pension schemes, and better living standards, among others. Suppliers are important to a business as well, and therefore there is a need for the business to satisfy the suppliers socially. Mackey believes that all the stakeholders in a business are important if the business is to attain its goals. Suppliers supply the business with the raw materials needed to produce certain goods or services, and it is their responsibility as well to get the finished products close to the customers. For the smooth functioning of the business, the social function of the business is to give them a fair deal. Social responsibility of a business to shareholders: the shareholders, in layman's terms, are the owners of a business, and the social responsibility of the business is to satisfy their needs. Although most shareholders focus mainly on increased profits, Mackey admits this, though from a different perspective. According to Mackey, profit maximization should not be the sole goal of a business; rather, the business needs to put the interests of the entire body of stakeholders first. Mackey (2005) argues that putting the interests of the stakeholders first through value creation will act as a means to an end. As the business works hard towards maximizing profits for the investors, it is important to bear in mind that by satisfying the customers and other stakeholders, the profits are likely to increase. The shareholders being the owners of

Sunday, August 25, 2019

Federal function Essay Example | Topics and Well Written Essays - 750 words

Federal function - Essay Example In the recent past, the federal government has been faced with a looming crisis in which it plans to lay off about eight hundred thousand staff. Their employment status hangs in the balance as the federal government braces to impose a shutdown from Saturday, a move that is likely to leave this huge population of staff suspended; it will also see several agencies, from offices to parks, winding down their operations. A move to reverse the highly anticipated action is in top gear, as United States President Barack Obama has called for crisis consultations at the White House with John Boehner, the Speaker of the Republican-controlled House. The shutdown's negative impact is due in spring, when even tourists from the international community would experience rough rides: they would find attraction sites closed on Saturdays; sites like the Statue of Liberty, the Smithsonian museums in Washington, and the former prison of Alcatraz, among other sites with fascinating features, would not be operational; in the meantime, vital organizations that deliver services like security, air traffic control, border authorities and the all-important postal services would partially operate or totally close down. The anticipated move not only threatens the staff at various workplaces but will also acutely affect government agencies, the Pentagon and Congress included. The move by Congress to classify workers as non-essential or essential has not gone down well with most employees, who are conscious of what the status implies. The non-essential workers would be expected not to show up for work on Monday, while the essential ones would have their schedules uninterrupted. A further hitch is also rife, as staff risk having their laptops and BlackBerries shut down. According to Jeffrey Zients, White House deputy director for management and budget (Askill, 2011), the pattern of the shutdown may be uneven: national parks, forests and the Smithsonian Institution would remain closed, while the National Institutes of Health Clinical Center would consider new patients but clinical trials would remain suspended. Troops stationed overseas in countries like Iraq and Afghanistan would not be given their wages, though payments to welfare recipients would continue. American holidaymakers who were late with their passport applications, as well as visitors who wish to visit America and have made US visa applications, would be compelled to eat humble pie, as their requests will not succeed. The debate as to whether the non-essential workers would be paid after the shutdown, as in previous years, has also drawn mixed reactions, as the federal government has this year clarified its position that it won't be as usual.

What the government is doing

The US President Barack Obama had a long late-night meeting with Boehner and Harry Reid, the Senate leader, to strike a deal that would keep the unfortunate circumstance from coming into play. Obama expressed optimism that both parties are committed to finding a solution to what is viewed as a possible menace. He is expecting early positive responses from the Republicans in order to halt the steps facilitating the shutdown becoming a reality. On the issue, the Republicans propose a forty-billion-dollar cut in the federal deficit, while the Democrats have resolved on thirty-four and a half billion

Saturday, August 24, 2019

Eco fueling marketing report Research Paper Example | Topics and Well Written Essays - 4000 words

Eco fueling marketing report - Research Paper Example

3.4 Technological factors
4.0 Customer Analysis
5.0 Competitor Analysis
6.0 Stakeholder Analysis
7.0 Internal and External Analysis (SWOT)
7.1 Strengths
7.2 Weaknesses
7.3 Opportunities
7.4 Threats
8.0 Conclusion
References

1.0 INTRODUCTION

The creation of utility, the power of goods and services to satisfy wants or needs, is of utmost importance to a marketer. For a service or product to be considered valuable in the market, it has to benefit customers and offer lucrative returns to a company (Boone and Kurtz, 2009, p. 5). Critical analysis of marketing strategies is required if a company is to attain utility for its products and services. PEST, SWOT, stakeholder, customer and environmental scan analyses must be conducted to appraise current business strategies and formulate recommendations for the establishment of new strategies or the improvement of current ones.
E-cofueling, a company based in Brisbane, is the focus of this marketing audit report. The company, which deals in the development and distribution of ethanol co-fueling of diesel engines, as well as the development of viable emission technologies, was established in 2009. Environmental scanning refers to the ongoing process of gathering data about different phenomena in the market with a view to identifying opportunities as well as threats. As the current market remains active, changes are inevitable, which presents its fair share of threats and opportunities to a company. In order to carry out appropriate environmental scans, a marketer must carry out extensive research and gather information pertaining to the social, technological, competitive, regulatory and economic factors that have a direct impact on market trends.

Assignment 2 - Business Scenario Example | Topics and Well Written Essays - 1000 words

2 - Business Scenario - Assignment Example Apple became a Global Compact signatory after realizing the benefit to brand and reputation of being a participant in the Compact principles. Moreover, interest grew due to the UN's rigorous enforcement of its standards and the proper action, directed by its regulatory mandates, taken against those companies that failed to meet the standards of the mandate. The principles center on the areas of labor, human rights, the environment, and issues pertaining to corruption. Apple strives to follow the principles to the letter, and hence embraces and promotes the statutory mandate of these principles within our global share of the marketplace. Our company has encouraged the invention of environmentally friendly products for our customers. Moreover, in realization of the tenth principle, Apple works tirelessly against all forms of corruption, for instance bribery and fraud, among others. Developing technology in order to enhance protection of the environment is a fundamental issue in the drive to promote principle nine (9) of the UN Global Compact framework. Apple devises less-polluting devices that are unlikely to have adverse effects on the environment. Our company recycles resources, using them sustainably while handling waste in a satisfactory manner. Due to the success of these processes at my company, I propose favorable recommendations to the Local Network companies. Firstly, the network companies can utilize a number of cleaner processes that ensure no harm to the environment. The companies can implement corporate policies on the use of environmentally safe products. Designing technologies for the long term and for sustainability, by reconstructing company research and development, is a vital means of accomplishing compliance with principles 9 and 10. Stakeholders are an essential part of any organization, as they enhance the production of resources. Therefore, network firms must engage the stakeholders in every decision pertaining to compliance with the two principles. Engagement of stakeholders is achievable through directing information to them, details that cover the environmental aspect of performance and the advantages of using such technologies in the market. The use of 'Environmental Technology Assessment' (EnTA) plays a significant role in ensuring environmental safety. It aims to provide network firms with a structured approach to assessing the consequences of technology for the environment, and therefore offers a blueprint for the inventions that companies can manufacture. Network companies must communicate with partners and competitors to ensure the availability of the best technologies to the entire industry. Many firms work with contractors when offering tenders in the early stages of production; hence these firms must promote tenders that stipulate the least environmental danger. Corruption is a vice in the network industry. In order to battle corruption, I recommend a number of strategies that were applied at Apple in the process of implementing the 10th principle. An internal assessment of the network organization and the establishment of anti-corruption policies within the firm are the first steps to curb corruption. The policies should cut across all of the administration and employees without bias while stretching to the entire firm.

Friday, August 23, 2019

International Management Accounting Essay Example | Topics and Well Written Essays - 2750 words

International Management Accounting - Essay Example Answer: In the era of merging cultures and competition in business, the criticality of the management role and of decision-making strategies has increased significantly. Decision-making on the basis of estimates and assumptions has long been obsolete. The need for a systematic approach to decision-making has been felt by companies and organizations seeking to improve the authenticity and accuracy of the decisions made (Gelinas et al., 2010). This need urges researchers and analysts to devise a methodology which covers the useful data and information about the company's revenue, losses and expenditures, and which could aid in making company plans and decisions accordingly.

Previously, the method used for gathering the information which would form the basis of management decisions was the Management Information System (MIS) (Gelinas et al., 2010). This system was based on manual data collection, and there were great chances of human error and delay in producing reports. Maintenance was another notable issue with this system, costing the company much time and trouble in extracting old data and statistics. Practice shows that ambiguity in the system leads to an unfair approach in the decision-making process, due to a lack of accountability of executives to investors or creditors (Gelinas et al., 2010). MIS was also influenced by the environmental and societal norms of the region. In many organizations, cultural and economic factors influence the decision-making strategy and proposals of top-level management (Nicolaou, 2000). Managers from two different religions, or two different backgrounds, will have different decision-making criteria and approaches. Often this factor largely influences their problem handling and planning approach, which diverges from the real interest or objective of the organization. Thus, a functional method was needed which could curtail the influence of cultural and socio-economic factors on the decision-making phenomenon (Nicolaou, 2000, pp. 103).

These factors account for the design of the accounting information method used in decision-making by executives and managers, commonly known in the corporate market as the Accounting Information System (AIS). Its function is to collect information and generate accurate statistical and financial reports of the company or organization. These reports are available both to the internal management and executives and to external parties, that is, the shareholders, investors, and taxation agencies (Gelinas et al., 2010). With the accuracy and transparency AIS provides in its reports, people related to the company have a clear idea of the company's standing and its financial ups and downs.

Looking into the history of AIS, we can draw a picture of the limitations and problems in its implementation on a wider scale. Based on computer-aided technology, AIS was installed as legacy systems, which were expensive to install and maintain. Moreover, only professionals could operate the format and language used in those systems, which had high complexity in generating reports and comparing two or more sets of data (Beke, 2010).

Thursday, August 22, 2019

Representation of Women in History Essay Example for Free

Representation of Women in History Essay Throughout American history, women have been the backbone of the country, working to take care of their families and the country itself. The recognition of this is shown by the different representations of America in a female context, whether as an insolent young Native American princess who has wronged her British mother, or as the Roman goddess Columbia in her long, flowing white robes.

The major change in the way America was represented pictorially was brought about by Phillis Wheatley in 1775, when she sent George Washington a poem describing America as a goddess called Columbia. The people of the time were quick to identify with this new interpretation, as they wanted to distance themselves from the negative British representations of America as a young, disobedient Native American woman. Also at that time, colonists were thinking of America as a place of self-knowledge and exploration, creating libraries and other places of study, complete with mock Roman architecture that reinforced the feeling of a "new Rome," and they liked the fact that Columbia was shown as a Roman goddess of sorts.

When looking at the differences between the print by Edward Savage and the print dated 1866, a change can be seen from Savage's peaceful-looking goddess Columbia to the armed, fighting women of the 1866 picture. The earlier picture, dated 1796, shows Liberty wearing a wreath of flowers, offering a cup to an eagle, surrounded by billowing clouds and shown up front, away from any violence. The later drawing from 1866 shows three women, two holding the flagpole and one with a sword, still fighting, surrounded by people. This picture comes at the end of the Revolution era and depicts America's fighting spirit, which has emerged from the battle.

When looking at the example of the eighteenth-century book Charlotte Temple by Susanna Rowson, the influence of the Columbian ideal can be seen in the book's belonging to the seduction genre, which was very popular in that era. This type of story touched many in the nation, as people related their worry about where they stood after going against Britain to the seduction of a young woman who was brought to the new land and then tricked into getting pregnant, only to be left to die on her own. Many wondered whether America would suffer the same fate as the seduced young woman, or whether the country would triumph as the new goddess, Columbia. It is no surprise that during such a perilous time in history people were drawn to these seduction-genre stories, to the point of believing in their hearts that Rowson's work was non-fiction, which it wasn't.

The recent 2005 portrait of Sacajawea is a new drawing on a golden dollar coin. She is shown looking back, her hair drawn back, with her son, Jean Baptiste, strapped to her. This representation of her is striking, with her large, dark eyes and her true Native American features, which are very pronounced and stunning. In earlier representations of Native American women, the facial features are all very close to the features in drawings of white women of the time. These earlier images were closer to the facial likeness of early pictures of Columbia.
The United States Mint clearly made this coin to commemorate the anniversary of the Lewis and Clark expedition, dated 1804. The recent golden dollar was dated 2005, which means that it was conceived of and based on a 2004 date, exactly 200 years apart. The coin is also meant to commemorate the Native American people themselves in history.

The representation of Columbia in American history can be seen as the evolution of the country itself. As society grew, and the perception of what it meant to be an American changed, the figures of women changed with it. The spirit of Columbia is equated with the spirit of our nation, and the artistry used to show that spirit in female form is still being used today, represented by the Sacajawea coin, celebrating the community ideal of what it is to be American.

Wednesday, August 21, 2019

Decision Tree for Prognostic Classification

Decision Tree for Prognostic Classification of Multivariate Survival Data and Competing Risks

1. Introduction

Decision tree (DT) is one way to represent rules underlying data. It is the most popular tool for exploring complex data structures. Besides that, it has become one of the most flexible, intuitive and powerful data analytic tools for determining distinct prognostic subgroups with similar outcome within each subgroup but different outcomes between the subgroups (i.e., prognostic grouping of patients). It is a hierarchical, sequential classification structure that recursively partitions the set of observations. Prognostic groups are important in assessing disease heterogeneity and for the design and stratification of future clinical trials. Because patterns of medical treatment are changing so rapidly, it is important that the results of the present analysis be applicable to contemporary patients.

Due to their mathematical simplicity, linear regression for continuous data, logistic regression for binary data, proportional hazards regression for censored survival data, marginal and frailty regression for multivariate survival data, and proportional subdistribution hazards regression for competing risks data are among the most commonly used statistical methods. These parametric and semiparametric regression methods, however, may not lead to faithful data descriptions when the underlying assumptions are not satisfied. Sometimes, model interpretation can be problematic in the presence of high-order interactions among predictors. DT has evolved to relax or remove these restrictive assumptions. In many cases, DT is used to explore data structures and to derive parsimonious models.

DT is selected to analyze the data rather than traditional regression analysis for several reasons. Discovery of interactions is difficult using traditional regression, because the interactions must be specified a priori. In contrast, DT automatically detects important interactions. Furthermore, unlike traditional regression analysis, DT is useful in uncovering variables that may be largely operative within a specific patient subgroup but may have minimal effect, or none, in other patient subgroups. Also, DT provides a superior means for prognostic classification. Rather than fitting a model to the data, DT sequentially divides the patient group into two subgroups based on prognostic factor values (e.g., tumor size below versus above a given cutpoint).

The landmark work on DT in the statistical community is the Classification and Regression Trees (CART) methodology of Breiman et al. (1984). A different approach is C4.5, proposed by Quinlan (1993). The original DT methods were used in classification and regression for categorical and continuous response variables, respectively. In a clinical setting, however, the outcome of primary interest is often duration of survival, time to event, or some other incomplete (that is, censored) outcome. Therefore, several authors have developed extensions of the original DT in the setting of censored survival data (Banerjee & Noone, 2008).

In science and technology, interest often lies in studying processes which generate events repeatedly over time. Such processes are referred to as recurrent event processes, and the data they provide are called recurrent event data, which fall under multivariate survival data.
Such data arise frequently in medical studies, where information is often available on many individuals, each of whom may experience transient clinical events repeatedly over a period of observation. Examples include the occurrence of asthma attacks in respirology trials, epileptic seizures in neurology studies, and fractures in osteoporosis studies. In business, examples include the filing of warranty claims on automobiles, or insurance claims for policy holders. Since multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events, further extensions of DT have been developed for such kinds of data.

In some studies, patients may be simultaneously exposed to several events, each competing for their mortality or morbidity. For example, suppose that a group of patients diagnosed with heart disease is followed in order to observe a myocardial infarction (MI). If by the end of the study each patient had either been observed to have an MI or was alive and well, then the usual survival techniques can be applied. In real life, however, some patients may die from other causes before experiencing an MI. This is a competing risks situation, because death from other causes prohibits the occurrence of MI. MI is considered the event of interest, while death from other causes is considered a competing risk. The group of patients dead of other causes cannot be considered censored, since their observations are not incomplete. The extension of DT can also be employed for competing risks survival time data. These extensions allow one to apply the technique to clinical trial data to aid in the development of prognostic classifications for chronic diseases.

This chapter will cover DT for multivariate and competing risks survival time data, as well as their application in the development of medical prognoses. The two kinds of multivariate survival time regression model, i.e. the marginal and the frailty regression model, have their own DT extensions. The extension of DT for competing risks, meanwhile, has two types of tree: first, the "single event" DT, developed with a splitting function that uses one event only; second, the "composite events" tree, which uses all the events jointly.

2. Decision Tree

A DT is a tree-like structure used for classification, decision theory, clustering, and prediction functions. It depicts rules for dividing data into groups based on the regularities in the data. A DT can be used for categorical and continuous response variables. When the response variable is continuous, the DT is often referred to as a regression tree; if the response variable is categorical, it is called a classification tree. However, the same concepts apply to both types of trees. DTs are widely used in computer science for data structures, in medical sciences for diagnosis, in botany for classification, in psychology for decision theory, and in economic analysis for evaluating investment alternatives. DTs learn from data and generate models containing explicit rule-like relationships among the variables. DT algorithms begin with the entire set of data, split the data into two or more subsets by testing the value of a predictor variable, and then repeatedly split each subset into finer subsets until the split size reaches an appropriate level. The entire modeling process can be illustrated in a tree-like structure. A DT model consists of two parts: creating the tree and applying the tree to the data.
To achieve this, DTs use several different algorithms. The most popular algorithm in the statistical community is Classification and Regression Trees (CART) (Breiman et al., 1984). This algorithm helped DTs gain credibility and acceptance in the statistics community. It creates binary splits on nominal or interval predictor variables for a nominal, ordinal, or interval response. The algorithms most widely used by computer scientists are ID3, C4.5, and C5.0 (Quinlan, 1993). The first versions of C4.5 and C5.0 were limited to categorical predictors; however, the most recent versions are similar to CART. Other algorithms include Chi-Square Automatic Interaction Detection (CHAID) for categorical responses (Kass, 1980), CLS, AID, TREEDISC, Angoss KnowledgeSEEKER, CRUISE, GUIDE, and QUEST (Loh, 2008). These algorithms use different approaches for splitting variables. CART, CRUISE, GUIDE, and QUEST use the statistical approach, while CLS, ID3, and C4.5 use an approach in which the number of branches off an internal node is equal to the number of possible categories. Another common approach, used by AID, CHAID, and TREEDISC, is one in which the number of nodes off an internal node varies from two to the maximum number of possible categories. Angoss KnowledgeSEEKER uses a combination of these approaches. Each algorithm employs different mathematical processes to determine how to group and rank variables.

Let us illustrate the DT method with a simplified example of credit evaluation. Suppose a credit card issuer wants to develop a model that can be used for evaluating potential candidates based on its historical customer data. The company's main concern is default of payment by a cardholder. Therefore, the model should be able to help the company classify a candidate as a possible defaulter or not. The database may contain millions of records and hundreds of fields. A fragment of such a database is shown in Table 1. The input variables include income, age, education, occupation, and many others, determined by some quantitative or qualitative methods.

Name      Age   Income   Education     Occupation    Default
Andrew    42    45600    College       Manager       No
Allison   26    29000    High School   Self Owned    Yes
Sabrina   58    36800    High School   Clerk         No
Andy      35    37300    College       Engineer      No
...

Table 1. Partial records and fields of a database table for credit evaluation

The model building process is illustrated in the tree structure in Figure 1. The DT algorithm first selects a variable, income, to split the dataset into two subsets. This variable, and also the splitting value of $31,000, is selected by a splitting criterion of the algorithm. There exist many splitting criteria (Mingers, 1989). The basic principle of these criteria is that they all attempt to divide the data into clusters such that variations within each cluster are minimized and variations between the clusters are maximized. The follow-up splits are similar to the first one. The process continues until an appropriate tree size is reached. Figure 1 shows a segment of the DT. Based on this tree model, a candidate with income of at least $31,000 and at least a college degree is unlikely to default on the payment; but a self-employed candidate whose income is less than $31,000 and whose age is less than 28 is more likely to default.

We begin with a discussion of the general structure of a popular DT algorithm in the statistical community, i.e. the CART model. A CART model describes the conditional distribution of y given X, where y is the response variable and X is a set of predictor variables (X = (X1, X2, ..., Xp)).
This model has two main components: a tree T with b terminal nodes, and a parameter Q = (q1, q2, ..., qb) ⊂ R^k which associates the parameter value qm with the mth terminal node. Thus a tree model is fully specified by the pair (T, Q). If X lies in the region corresponding to the mth terminal node, then y|X has the distribution f(y|qm), where we use f to represent a conditional distribution indexed by qm. The model is called a regression tree or a classification tree according to whether the response y is quantitative or qualitative, respectively.

2.1 Splitting a tree

The DT T subdivides the predictor variable space as follows. Each internal node has an associated splitting rule which uses a predictor to assign observations to either its left or right child node. The internal nodes are thus partitioned into two subsequent nodes using the splitting rule. For quantitative predictors, the splitting rule is based on a cutpoint c, and assigns observations for which {xi ≤ c} to the left child node and those for which {xi > c} to the right child node.

For a regression tree, the conventional algorithm models the response in each region Rm as a constant qm. Thus the overall tree model can be expressed as (Hastie et al., 2001):

f(X) = \sum_{m=1}^{b} q_m I(X \in R_m),   (1)

where Rm, m = 1, 2, ..., b, constitute a partition of the predictor space, and therefore represent the space of b terminal nodes. If we adopt the method of minimizing the sum of squares as our criterion to characterize the best split, it is easy to see that the best estimate \hat{q}_m is just the average of the yi in region Rm:

\hat{q}_m = \frac{1}{N_m} \sum_{x_i \in R_m} y_i,   (2)

where Nm is the number of observations falling in node m. The residual sum of squares is

Q_m(T) = \frac{1}{N_m} \sum_{x_i \in R_m} (y_i - \hat{q}_m)^2,   (3)

which will serve as an impurity measure for regression trees. If the response is a factor taking outcomes 1, 2, ..., K, the impurity measure Qm(T) defined in (3) is not suitable. Instead, we represent a region Rm with Nm observations with

\hat{p}_{mk} = \frac{1}{N_m} \sum_{x_i \in R_m} I(y_i = k),   (4)

which is the proportion of class k (k ∈ {1, 2, ..., K}) observations in node m. We classify the observations in node m to the class k(m) = \arg\max_k \hat{p}_{mk}, the majority class in node m. Different measures Qm(T) of node impurity include the following (Hastie et al., 2001):

Misclassification error: 1 - \hat{p}_{m k(m)}
Gini index: \sum_{k=1}^{K} \hat{p}_{mk} (1 - \hat{p}_{mk})
Cross-entropy or deviance: -\sum_{k=1}^{K} \hat{p}_{mk} \log \hat{p}_{mk}   (5)

For binary outcomes, if p is the proportion of the second class, these three measures are 1 − max(p, 1 − p), 2p(1 − p), and −p log p − (1 − p) log(1 − p), respectively. All three definitions of impurity are concave, having minima at p = 0 and p = 1 and a maximum at p = 0.5. Entropy and the Gini index are the most common, and generally give very similar results except when there are two response categories.

2.2 Pruning a tree

To be consistent with conventional notation, let us define the impurity of a node h as I(h) ((3) for a regression tree, and any one of (5) for a classification tree). We then choose the split s with maximal impurity reduction

\Delta I(s, h) = p(h) I(h) - p(h_L) I(h_L) - p(h_R) I(h_R),   (6)

where hL and hR are the left and right child nodes of h and p(h) is the proportion of the sample falling in node h. How large should we grow the tree then? Clearly a very large tree might overfit the data, while a small tree may not be able to capture the important structure. Tree size is a tuning parameter governing the model's complexity, and the optimal tree size should be adaptively chosen from the data. One approach would be to continue the splitting procedure until the decrease in impurity due to the split exceeds some threshold. This strategy is too short-sighted, however, since a seemingly worthless split might lead to a very good split below it.
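The quantities in (5) and (6) are easy to compute directly. The following sketch is illustrative Python of our own (the function names and the toy credit arrays echoing Table 1 are hypothetical); it evaluates the impurity measures, the impurity reduction for a candidate split, and the exhaustive cutpoint search that the splitting step performs on one ordered predictor.

```python
import numpy as np

def node_impurity(y, measure="gini"):
    """Impurity measures of equation (5) for a node's class labels y."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    if measure == "misclassification":
        return 1.0 - p.max()
    if measure == "gini":
        return float(np.sum(p * (1.0 - p)))
    if measure == "entropy":
        return float(-np.sum(p * np.log(p)))
    raise ValueError(measure)

def impurity_reduction(y, go_left, measure="gini"):
    """Delta I of equation (6), with child weights taken relative to the node."""
    n = len(y)
    left, right = y[go_left], y[~go_left]
    return (node_impurity(y, measure)
            - len(left) / n * node_impurity(left, measure)
            - len(right) / n * node_impurity(right, measure))

def best_split(x, y, measure="gini"):
    """Exhaustive search for the best cutpoint c; {x <= c} goes left."""
    best_c, best_gain = None, 0.0
    for c in np.unique(x)[:-1]:               # candidate cutpoints
        gain = impurity_reduction(y, x <= c, measure)
        if gain > best_gain:
            best_c, best_gain = c, gain
    return best_c, best_gain

# Toy stand-in for Table 1: income against default (1 = defaulted).
income = np.array([45600, 29000, 36800, 37300, 28500, 52000])
default = np.array([0, 1, 0, 0, 1, 0])
print(best_split(income, default))   # isolates the low-income defaulters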
The preferred strategy is to grow a large tree T0, stopping the splitting process when some minimum number of observations in a terminal node (say 10) is reached. Then this large tree is pruned using a pruning algorithm, such as the cost-complexity or split-complexity pruning algorithm. To prune the large tree T0 by using the cost-complexity algorithm, we define a subtree T ⊂ T0 to be any tree that can be obtained by pruning T0, that is, by collapsing any number of its internal nodes, and define \tilde{T} to be the set of terminal nodes of T. As before, we index terminal nodes by m, with node m representing region Rm. Let |\tilde{T}| denote the number of terminal nodes in T (|\tilde{T}| = b). We use |\tilde{T}| instead of b following conventional notation, and define the risk of a tree as

Regression tree: R(T) = \sum_{m=1}^{|\tilde{T}|} N_m Q_m(T),
Classification tree: R(T) = \sum_{h \in \tilde{T}} p(h)\, r(h),   (7)

where r(h) measures the impurity of node h in a classification tree (it can be any one of (5)). We define the cost-complexity criterion (Breiman et al., 1984)

R_a(T) = R(T) + a |\tilde{T}|,   (8)

where a (> 0) is the complexity parameter. The idea is, for each a, to find the subtree Ta ⊆ T0 that minimizes Ra(T). The tuning parameter a > 0 governs the tradeoff between tree size and its goodness of fit to the data (Hastie et al., 2001). Large values of a result in a smaller tree Ta, and conversely for smaller values of a. As the notation suggests, with a = 0 the solution is the full tree T0. To find Ta we use weakest-link pruning: we successively collapse the internal node that produces the smallest per-node increase in R(T), and continue until we produce the single-node (root) tree. This gives a (finite) sequence of subtrees, and one can show this sequence must contain Ta. See Breiman et al. (1984) and Ripley (1996) for details. Estimation of a (\hat{a}) is achieved by five- or ten-fold cross-validation. Our final tree is then denoted T_{\hat{a}}.

It follows that, in CART and related algorithms, classification and regression trees are produced from data in two stages. In the first stage, a large initial tree is produced by splitting one node at a time in an iterative, greedy fashion. In the second stage, a small subtree of the initial tree is selected, using the same data set. Whereas the splitting procedure proceeds in a top-down fashion, the second stage, known as pruning, proceeds from the bottom up by successively removing nodes from the initial tree.

Theorem 1 (Breiman et al., 1984, Section 3.3). For any value of the complexity parameter a, there is a unique smallest subtree of T0 that minimizes the cost-complexity.

Theorem 2 (Zhang & Singer, 1999, Section 4.2). If a2 > a1, the optimal subtree corresponding to a2 is a subtree of the optimal subtree corresponding to a1.

More generally, suppose we end up with m thresholds, 0 < a1 < a2 < ... < am. Then

T_{a_1} \supseteq T_{a_2} \supseteq \cdots \supseteq T_{a_m},   (9)

where T ⊇ T' means that T' is a subtree of T. These are called nested optimal subtrees.

3. Decision Tree for Censored Survival Data

Survival analysis is the phrase used to describe the analysis of data that correspond to the time from a well-defined time origin until the occurrence of some particular event or end-point. It is important to state what the event is and when the period of observation starts and finishes. In medical research, the time origin will often correspond to the recruitment of an individual into an experimental study, and the end-point is the death of the patient or the occurrence of some adverse event. Survival data are rarely normally distributed; they are skewed and typically comprise many early events and relatively few late ones. It is these features of the data that necessitate the special methods of survival analysis.
The specific difficulties relating to survival analysis arise largely from the fact that only some individuals have experienced the event and, consequently, survival times will be unknown for a subset of the study group. This phenomenon is called censoring, and it may arise in the following ways: (a) a patient has not (yet) experienced the relevant outcome, such as relapse or death, by the time the study has to end; (b) a patient is lost to follow-up during the study period; (c) a patient experiences a different event that makes further follow-up impossible. Generally, censoring times may vary from individual to individual. Such censored survival times underestimate the true (but unknown) time to event. Visualising the survival process of an individual as a time-line, the event (assuming it is to occur) is beyond the end of the follow-up period. This situation is often called right censoring. Most survival data include right-censored observations.

In many biomedical and reliability studies, interest focuses on relating the time to event to a set of covariates. The Cox proportional hazards model (Cox, 1972) has been established as the major framework for the analysis of such survival data over the past three decades. But often in practice one primary goal of survival analysis is to extract meaningful subgroups of patients determined by prognostic factors, such as patient characteristics that are related to the level of disease. Although the proportional hazards model and its extensions are powerful in studying the association between covariates and survival times, they are usually problematic for prognostic classification. One approach to classification is to compute a risk score based on the estimated coefficients from regression methods (Machin et al., 2006). This approach, however, may be problematic for several reasons. First, the definition of risk groups is arbitrary. Secondly, the risk score depends on the correct specification of the model. It is difficult to check whether the model is correct when many covariates are involved. Thirdly, when there are many interaction terms and the model becomes complicated, the results become difficult to interpret for the purpose of prognostic classification. Finally, a more serious problem is that an invalid prognostic group may be produced if no patient is included in a covariate profile. In contrast, DT methods do not suffer from these problems.

Owing to the development of fast computers, computer-intensive methods such as DT methods have become popular. Since these investigate the significance of all potential risk factors automatically and provide interpretable models, they offer distinct advantages to analysts. Recently a large number of DT methods have been developed for the analysis of survival data, where the basic concepts for growing and pruning trees remain unchanged, but the choice of the splitting criterion has been modified to incorporate censored survival data. Applications of DT methods to survival data are described by a number of authors (Gordon & Olshen, 1985; Ciampi et al., 1986; Segal, 1988; Davis & Anderson, 1989; Therneau et al., 1990; LeBlanc & Crowley, 1992; LeBlanc & Crowley, 1993; Ahn & Loh, 1994; Bacchetti & Segal, 1995; Huang et al., 1998; Keleş & Segal, 2002; Jin et al., 2004; Cappelli & Zhang, 2007; Cho & Hong, 2008), including the text by Zhang & Singer (1999).
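To make the right censoring just described concrete, here is a small self-contained sketch of the Kaplan-Meier estimator, the standard nonparametric estimate of the survival curve. The implementation and toy data are illustrative only and are not drawn from the papers cited above.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Kaplan-Meier survival curve for right-censored data.

    times    : observed follow-up times
    observed : 1 if the event occurred, 0 if the time is right-censored
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed)
    surv = 1.0
    curve = []
    for t in np.unique(times[observed == 1]):   # distinct event times
        at_risk = np.sum(times >= t)            # still under follow-up at t
        deaths = np.sum((times == t) & (observed == 1))
        surv *= 1.0 - deaths / at_risk          # multiply conditional survival
        curve.append((t, surv))
    return curve

# Censored patients, cases (a)-(c) above, contribute rows with observed = 0.
print(kaplan_meier([3, 5, 5, 8, 12, 14], [1, 1, 0, 1, 0, 1]))
```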
4. Decision Tree for Multivariate Censored Survival Data

Multivariate survival data frequently arise when we face the complexity of studies involving multiple treatment centres, family members, and measurements repeatedly made on the same individual. For example, in multi-centre clinical trials, the outcomes for groups of patients at several centres are examined. In some instances, patients in a centre might exhibit similar responses due to the uniformity of surroundings and procedures within a centre. This would result in correlated outcomes at the level of the treatment centre. For studies of family members or litters, correlation in outcome is likely for genetic reasons. In this case, the outcomes would be correlated at the family or litter level. Finally, when one person or animal is measured repeatedly over time, correlation will most definitely exist in those responses. Within the context of correlated data, the observations which are correlated for a group of individuals (within a treatment centre or a family) or for one individual (because of repeated sampling) are referred to as a cluster, so that from this point on, the responses within a cluster will be assumed to be correlated.

Analysis of multivariate survival data is complex due to the presence of dependence among survival times and unknown marginal distributions. Multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events. A successful treatment of correlated failure times was made by Clayton and Cuzick (1985), who modelled the dependence structure with a frailty term. Another approach is based on a proportional hazards formulation of the marginal hazard function, which has been studied by Wei et al. (1989) and Liang et al. (1993). Notably, Prentice et al. (1981) and Andersen & Gill (1982) also suggested two alternative approaches to analyze multiple event times.

Extension of tree techniques to multivariate censored data is motivated by the classification issues associated with multivariate survival data. For example, clinical investigators design studies to form prognostic rules. Credit risk analysts collect account information to build up credit-scoring criteria. Frequently, in such studies the outcomes of ultimate interest are correlated times to event, such as relapses, late payments, or bankruptcies. Since DT methods recursively partition the predictor space, they are an alternative to conventional regression tools. This section is concerned with the generalization of DT models to multivariate survival data. In attempting to facilitate an extension of DT methods to multivariate survival data, more difficulties need to be circumvented.

4.1 Decision tree for multivariate survival data based on marginal model

DT methods for multivariate survival data are not many. Almost all the multivariate DT methods have been based on between-node heterogeneity, with the exception of Molinaro et al. (2004), who proposed a general within-node homogeneity approach for both univariate and multivariate data. The multivariate methods proposed by Su & Fan (2001, 2004) and Gao et al. (2004, 2006) concentrated on between-node heterogeneity and used the results of regression models. Specifically, for recurrent event data and clustered event data, Su & Fan (2004) used likelihood-ratio tests while Gao et al. (2004) used robust Wald tests from a gamma frailty model to maximize the between-node heterogeneity. Su & Fan (2001) and Fan et al.
4.1 Decision tree for multivariate survival data based on marginal model

There are relatively few DT methods for multivariate survival data. Almost all of the multivariate DT methods have been based on between-node heterogeneity, with the exception of Molinaro et al. (2004), who proposed a general within-node homogeneity approach for both univariate and multivariate data. The multivariate methods proposed by Su & Fan (2001, 2004) and Gao et al. (2004, 2006) concentrated on between-node heterogeneity and used the results of regression models. Specifically, for recurrent event data and clustered event data, Su & Fan (2004) used likelihood-ratio tests while Gao et al. (2004) used robust Wald tests from a gamma frailty model to maximize the between-node heterogeneity. Su & Fan (2001) and Fan et al. (2006) used a robust log-rank statistic, while Gao et al. (2006) used a robust Wald test from the marginal failure-time model of Wei et al. (1989).

The generalization of DT to multivariate survival data is developed using the goodness-of-split approach. A DT grown by goodness of split maximizes a measure of between-node difference, and therefore only internal nodes have associated two-sample statistics. The tree structure differs from CART because, for trees grown by minimizing within-node error, each node, whether terminal or internal, has an associated impurity measure. This is why the CART pruning procedure is not directly applicable to such trees. However, with the split-complexity pruning algorithm of LeBlanc & Crowley (1993), trees grown by goodness of split have become well-developed tools. This modified tree technique not only provides a convenient way of handling survival data but also enlarges the applied scope of DT methods in a more general sense. Especially in situations where defining a prediction error is relatively difficult, growing trees by a two-sample statistic, together with split-complexity pruning, offers a feasible way of performing tree analysis. The DT procedure consists of three parts: a method to partition the data recursively into a large tree, a method to prune the large tree into a subtree sequence, and a method to determine the optimal tree size. In the multivariate survival trees described here, the between-node difference is measured by a robust Wald statistic derived from the marginal approach to multivariate survival data developed by Wei et al. (1989). Split-complexity pruning is borrowed from LeBlanc & Crowley (1993), and a test sample is used to determine the right tree size.

4.1.1 The splitting statistic

We consider n independent subjects, each of which may experience up to K types (or numbers) of failure; if the number of failures varies across subjects, K is the maximum. We let Tik = min(Yik, Cik), where Yik is the failure time of the ith subject for the kth type of failure and Cik is the potential censoring time of the ith subject for the kth type of failure, with i = 1,...,n and k = 1,...,K. Then dik = I(Yik ≤ Cik) is the indicator for failure, and the vector of covariates is denoted Zik = (Z1ik,..., Zpik)T. To partition the data, we consider the hazard model for the ith subject and the kth type of failure, using the distinguishable baseline hazards described by Wei et al. (1989). For a candidate split of the data on covariate Zik at cutpoint c, the model is

λk(t; Zik) = λ0k(t) exp{b I(Zik ≤ c)},   (10)

where the indicator function I(Zik ≤ c) equals 1 if Zik ≤ c and 0 otherwise. The parameter b is estimated by maximizing the partial likelihood. If the observations within the same subject were independent, the partial likelihood function for b under the distinguishable-baseline model (10) would be

L(b) = ∏ k=1..K ∏ i=1..n [ exp{b I(Zik ≤ c)} / Σ l∈Rk(Tik) exp{b I(Zlk ≤ c)} ]^dik,   (11)

where Rk(t) = {l : Tlk ≥ t} is the risk set at time t for the kth failure type. Since the observations within the same subject are not independent for multivariate failure times, we refer to the above function as the pseudo-partial likelihood. The estimator b̂ is obtained by maximizing the likelihood, i.e. by solving ∂ log L(b)/∂b = 0. Wei et al. (1989) showed that b̂ is asymptotically normally distributed. However, the usual estimator A(b)⁻¹ of the variance of b̂, where

A(b) = −∂² log L(b)/∂b²,   (12)

is not valid; we refer to A(b)⁻¹ as the naïve estimator. Wei et al. (1989) showed that a valid (robust) estimator of the variance of b̂ is

D(b̂) = A(b̂)⁻¹ B(b̂) A(b̂)⁻¹,   (13)

where B(b) is the empirical variance of the per-subject score contributions; D(b̂) is often referred to as the robust or sandwich variance estimator. Hence, the robust Wald statistic corresponding to the null hypothesis H0 : b = 0 is

W = b̂² / D(b̂).   (14)
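A sketch of how this splitting statistic might be computed in practice, assuming the Python package lifelines: the Wei et al. (1989) marginal model is approximated by a Cox fit stratified on failure type (distinguishable baselines) with a cluster-robust sandwich variance clustered on subject. The column names time, event, ftype, id, and Z are assumptions of this example, not of the chapter.

```python
import pandas as pd
from lifelines import CoxPHFitter

def robust_wald(df: pd.DataFrame, cutpoint: float) -> float:
    """Robust Wald statistic W = b^2 / D(b) for the split I(Z <= cutpoint)."""
    d = df.copy()
    d["split"] = (d["Z"] <= cutpoint).astype(int)
    cph = CoxPHFitter()
    # strata='ftype' gives a separate baseline hazard per failure type;
    # cluster_col='id' makes lifelines use the sandwich variance D(b).
    cph.fit(d[["time", "event", "split", "ftype", "id"]],
            duration_col="time", event_col="event",
            strata="ftype", cluster_col="id")
    b = cph.params_["split"]
    se = cph.standard_errors_["split"]  # robust (sandwich) standard error
    return (b / se) ** 2

# The best split maximizes W over candidate cutpoints, e.g.:
# best_c = max(candidate_cuts, key=lambda c: robust_wald(df, c))
```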
4.1.2 Tree growing

To grow a tree, the robust Wald statistic is evaluated for every possible binary split of the predictor space Z. The split, s, can take several forms: splits on a single covariate, splits on linear combinations of predictors, and Boolean combinations of splits. The simplest form of split involves only one covariate, where the form of the split depends on whether the covariate is ordered or nominal. The "best split" is defined as the one corresponding to the maximum robust Wald statistic. The data are then divided into two groups according to the best split, and this splitting scheme is applied recursively to the learning sample until the predictor space is partitioned into many regions. A node is not partitioned further when any of the following occurs: (a) the node contains fewer than, say, 10 or 20 subjects, if the overall sample size is large enough to permit this (we suggest using a larger minimum node size than in CART, where the default value is 5); (b) all the observed times in the subset are censored, which makes the robust Wald statistic unavailable for any split; (c) all the subjects have identical covariate vectors, or the node contains only complete observations with identical survival times, in which case the node is considered pure. The whole procedure results in a large tree, which can be used for the purpose of exploring the data structure.

4.1.3 Tree pruning

Let T denote either a particular tree or the set of all its nodes, and let S and T̃ denote the set of internal nodes and the set of terminal nodes of T, respectively; therefore T = S ∪ T̃. Also let |·| denote the number of nodes. Let G(h) represent the maximum robust Wald statistic at a particular (internal) node h. To measure the performance of a tree, a split-complexity measure Ga(T) is introduced as in LeBlanc and Crowley (1993):

Ga(T) = G(T) − a|S|, where G(T) = Σ h∈S G(h),   (15)

where the number of internal nodes, |S|, measures complexity; G(T) measures the goodness of split in T; and the complexity parameter a acts as a penalty for each additional split. Start with the large tree T0 obtained from the splitting procedure. For any internal node h of T0, i.e. h ∈ S0, a function g(h) is defined as

g(h) = G(Th) / |Sh|,   (16)

where Th denotes the branch with h as its root and Sh is the set of all internal nodes of Th. The weakest link in T0 is then the node h̃ such that g(h̃) = min h∈S0 g(h).
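The weakest-link computation in (16) can be illustrated with a toy example (not from the original text); the dictionary encoding of the tree and the G(h) values are assumptions made for this sketch.

```python
# Illustrative weakest-link search for split-complexity pruning (Section
# 4.1.3). Each internal node carries its split statistic G(h) and two child
# links; leaves are represented by None.
G = {"root": 11.2, "L": 6.5, "R": 2.1}             # robust Wald statistics G(h)
kids = {"root": ("L", "R"), "L": (None, None), "R": (None, None)}

def internal_nodes(h):
    """All internal nodes of the branch T_h rooted at h (i.e., S_h)."""
    if h is None or h not in G:
        return []
    left, right = kids[h]
    return [h] + internal_nodes(left) + internal_nodes(right)

def g(h):
    """g(h) = G(T_h) / |S_h|: average split statistic over the branch."""
    s_h = internal_nodes(h)
    return sum(G[n] for n in s_h) / len(s_h)

weakest = min(internal_nodes("root"), key=g)
print(weakest, round(g(weakest), 2))  # the branch to collapse first
```

Running this prints "R 2.1": the branch rooted at R contributes the least goodness of split per node, so it is pruned first as a grows.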

Tuesday, August 20, 2019

Modern Approaches to Food Production

The world is currently facing huge issues such as hunger; many people are starving and dying because of the lack of food. There is not enough food to cater for everyone. Faster food production methods need to be considered, but alternative methods may be dangerous to our health. The requirements of the project were to question 15 people from different groups who are likely to have different opinions.

Sources of information: questionnaire, web search, books, personal discussions.
Who I surveyed: family members, friends, my father's employees.

Questions asked:
1. Are people aware of the food shortage in the world?
2. Do they understand what modern food production is?
3. Do they agree with the statement?
4. Do they buy organic foods?
5. Is there a difference in taste between the two foods?
6. What are the pros and cons of the two food production methods?
7. Why do they think organic food is expensive?
8. How do farming methods differ?

Modern food production methods vs. organic foods

What are organic foods? Pros and cons of organic production.

Organic foods are naturally grown crops that are grown on a small scale. They require nutritionally rich soils and special care. They cannot be grown everywhere, as you need to consider important factors such as weather and the enrichment of the soils. Organic foods are pesticide-free, making them prone to bugs and animals. For food to be considered organic it needs to come from an organic farm, and processing plants need to be organic. For processing plants to qualify as organic, they need to be examined by government officials to ensure that they are up to USDA standards. Packaging labelled organic must contain at least 95% organic ingredients. Organic foods have plenty of benefits, such as:
Health benefits - chemical residues in food can create issues, especially with growing children.
Environmental benefits - farming methods that use chemicals are killing wildlife such as birds and insects; organic crops balance the ecosystem.
Human and animal benefits - workers and animals are not surrounded by toxins, and the animals have good living conditions.

"Organic agriculture is a production system that sustains the health of soils, ecosystems and people. It relies on ecological processes, biodiversity and cycles adapted to local conditions, rather than the use of inputs with adverse effects. Organic agriculture combines tradition, innovation and science to benefit the shared environment and promote fair relationships and a good quality of life for all involved." - International Federation of Organic Agriculture Movements

Pros: healthier; less harmful to your body; no chemicals; nutritional; less damage to the environment; better quality; better taste.
Cons: slowly grown; more effort to grow; more expensive; only grown during particular seasons; short shelf life; no guarantee of safety, as they are not disease-free; very dependent on weather and environment; some foods not available; less production.

What is genetic modification? Is it the baddie that its reputation suggests?

The world's population is increasing rapidly every day, resulting in food scarcity. There is not enough food available for everyone. Traditional farming methods are too slow: they produce food slowly and require special care, and at this speed of production only the wealthier half of the population is fed, as it is costly. Alternative routes have had to be established in order to feed poor countries that are suffering from hunger. A new farming method, introduced in the 1990s, is faster and cheaper.
Genetically modified foods are crops grown on a large scale, usually in unnatural environments; these crops are not grown naturally. The method is cost-effective and produces food in bigger quantities, making it more affordable. The foods have been nutritionally balanced and are not prone to diseases. Plant geneticists work with the genes found in plants: for example, a gene from a plant that can withstand drought is inserted into a plant that cannot tolerate droughts. Genetically modified foods involve crossing species which could not cross in nature. Genetically modified foods have been highly criticized, but they are helpful and will decrease hunger.

Advantages: pest resistant; herbicide resistant; cold tolerance; drought tolerance; nutritional; pharmaceuticals can be put into foods; large amounts of production; faster; food can be cloned; cheaper; more resilient; mass production; more availability; no diseases; longer shelf life.

Disadvantages: contain harmful chemicals; the chemicals used are not good for health; the long-term effects are bad; pesticides affect the environment; not healthy; no genetic variation; lower nutritional value; full of preservatives.

The world could not last on traditionally grown foods, as production is slow, and countries in poverty cannot afford organic foods unless they grow the food themselves organically. In the cattle industry there is such a demand for meat that farmers are not able to raise their cattle fast enough. Beef farmers in countries like Canada have been injecting their cattle with so many growth hormones that the average cow only survives for a maximum of three years. Farmers are not only trying to supply enough food but have also become greedy because of the amount of money they receive for the meat, which is often exported. The chickens whose packaged meat amazes us in stores have spent their whole lives in a chicken shed. In this shed the chickens are packed together and can hardly move. They are fed buckets of food every day, and at night the lights inside the shed are left on so the animals think it is still daytime and therefore carry on eating. This situation is common among chicken farms that supply meat to fast-food chains like KFC. There is such a huge demand for chicken from consumers that the birds are unfairly treated; they are so full of hormones that some do not have legs or wings. Yet without this method of production KFC could never cater for all its consumers.

Issues concerning human health include allergenicity, gene transfer, outcrossing, and effects on the environment. Genetically modifying food is a faster and more effective production technique, and the main focus of genetically modified farming is to generate the largest capital possible.

What chemicals are used to aid the production and supply of foods, and what functions do they perform? Chemicals put into food have become a huge concern worldwide and are affecting international trade. Contamination involves the presence of various chemicals in foods, such as pesticides, animal drugs and other agricultural chemicals. Manufactured foods that contain all these additives are seriously dangerous to your health and can cause future problems we are not yet aware of.

What is radurisation, what foods are irradiated, and what are the pros and cons? Another factor that concerns consumers is radurisation: the treatment of food with ionising radiation to enhance its shelf life by minimizing the number of microorganisms that appear when food is mishandled.
Foods that are irradiated are perishable foods such as fruit and frozen foods. Food suppliers rate radurisation highly and state that the foods are safe to eat. Examples of foods that are irradiated: spices, fruits, meats. Pros: food is safer to eat; longer life of food in stores; kills insects; delays ripening of fruits; preserves nutrients.

Analysis of questionnaire answers:

Are you aware of the food shortage the world is currently facing? This result was surprising, as the shortage of food has been such a huge issue and world hunger is spoken of worldwide.

Do you understand what modern food production is? Only two people were unaware of what modern food production is. This could be because they are uneducated about the situation or take no interest in it.

Do you agree with the statement "Without modern food production methods, the world food shortage would be in even more of a crisis today"? Three people out of 15 believed that people could make more of an effort to grow organic foods on their own and not depend on modern food production methods to end world hunger. They said that people are getting lazy and are destroying the planet because of it.

Do you buy organic foods? Nearly half of the people interviewed do not buy organic foods because of the price and their limited availability.

Is there a difference in taste between organic foods and genetically modified foods? Eight of the 15 interviewed said that there was no difference in taste. People do not usually pay much attention to slight taste differences in foods.

What are the pros and cons of these two food production methods? This was an open-ended question, and everyone's answers differed.

Why do you think organic foods are more expensive than genetically modified foods? The majority of the people interviewed gave similar answers: organic foods take longer to grow, come in smaller quantities, need more care, and are less available.

How do you think the farming methods of organically produced foods and genetically modified foods differ? This was an opinion question, and people had similar views, such as: organic farming does not use chemicals while genetically modified farming does.

Did I get the results that I expected? I expected these results, as many of the school pupils who answered the questionnaire have learnt about genetically modified foods. My father owns an agricultural business, so the members of my survey who work for him know about the food shortage and related topics such as chemicals and organic foods, as they study them on courses. Another member of the 15 people questioned has a passion for the environment, so I knew those answers would be accurate. All answers were accurate and similar to the literature research. The majority of the people had an idea of what the questions were about. The survey results were reliable, as I compared the answers to web research; I feel differently, however.

How could I improve the project? I should have interviewed more people and a greater variety of people. I should have asked better questions, which would have helped with my project answers. I should have started the project sooner so that I had more time.

Conclusion: Looking at my information and the opinions of others, I believe that the world is extremely dependent on modern food production methods. Although huge criticism has been directed at genetically modified foods, the world could not go on without them.
Organic food production is too slow and takes too much effort to feed billions of people. There is, however, enough space and there are enough resources for us to grow our own food, although it would take time and the food would not be readily available in stores or at home. Countries that have food issues are normally badly run and face big problems such as political instability, and some countries are not resourceful enough to grow their own food. The world's population is growing rapidly every day, so an alternative route of food production needs to be taken. As people's incomes increase, so does the demand for more and better-quality food. In countries like China, more people are earning better salaries and are turning from vegetarian meals to meat. This is costly, and food cannot be catered for the whole of China, let alone the whole world. Only the richer populations eat regular meals, as food is unavailable in many parts of Africa and elsewhere.