Wednesday, July 31, 2019

The Decision to Breastfeed Is a Very Personal One Essay

The topic of breastfeeding always elicits strong opinions from family and friends. What matters is that the infant gets proper nutrition for optimal growth and development. The American College of Obstetricians and Gynecologists and the American Academy of Pediatrics both place great emphasis on the importance of breastfeeding, while recognizing that every infant and mother is unique and faces different challenges. Breast milk provides complete nutrition for infants: it has the right combination of protein, vitamins, fat and everything else an infant needs for growth and development. It also contains antibodies that help infants fight off bacteria and viruses, and breastfeeding greatly reduces the risk of allergies and asthma. Infants who are exclusively breastfed for the first six months, without any formula, tend to have fewer respiratory illnesses, ear infections and bouts of diarrhea, as well as fewer trips to the doctor and fewer hospitalizations. Some studies have also linked breastfeeding to higher IQ scores. The physical closeness, skin-to-skin contact and eye contact help the infant bond with the mother and feel secure. Breastfeeding is likewise linked to healthy weight gain in infants and helps fight childhood obesity, and according to the American Academy of Pediatrics it can reduce the risk of sudden infant death syndrome, diabetes, obesity and certain cancers. Educating the new mother about breastfeeding is therefore imperative to ensure proper nourishment of the infant. That education starts with assessing the mother's current knowledge of and attitude toward breastfeeding; once these are understood and the benefits have been explained, nurses should address the most common concerns of new mothers:
* Weight gain - Breastfeeding burns extra calories and helps the mother lose pregnancy weight faster. It also releases the hormone oxytocin, which helps the uterus return to its original size and reduces uterine bleeding.
* Expenses - Breastfeeding saves money, since there is no need to buy formula, rubber nipples and other formula-related supplies.
* Sore nipples - Sore nipples are normal. Make sure the baby latches on correctly, and use one finger to break the suction of the baby's mouth after each feeding. Holding ice or a frozen bag of peas against sore nipples can also ease the discomfort.
* Not producing enough milk - As a general rule of thumb, an infant wetting six to eight diapers a day is getting enough milk. Breast size has nothing to do with milk production. Plenty of sleep, good nutrition and proper hydration help the body produce more milk.
* Storing and pumping milk - Milk can be expressed by hand or pumped with a breast pump. Breast milk can be used safely within 2 days if stored in the refrigerator; frozen breast milk keeps for 3 months. Thaw frozen milk in warm water or in the refrigerator, and never heat breast milk in a microwave oven.
* Breast engorgement - Engorgement is healthy and natural; it happens when the breasts become full of milk. It can also mean that the blood vessels in the breast have become congested. The difference between the two is that a normally full breast stays soft and pliable.
* Mastitis - An infection of the breast caused by bacteria that enter through a cracked nipple after breastfeeding. Antibiotics are usually needed to clear it up; call the doctor if flu-like symptoms, fever or fatigue appear.
* Stress - Feeling overwhelmed during breastfeeding is normal, but being overly stressed or anxious can interfere with the let-down reflex, the body's natural release of milk into the milk ducts. Staying as calm and relaxed as possible before and during nursing helps the milk let down and flow more easily, which in turn helps the infant stay calm and relaxed and increases emotional bonding.

Tuesday, July 30, 2019

Pepsi & Coke: Related to Game Theory Essay

In May 1886, Coca-Cola was introduced by John Pemberton, a pharmacist from Atlanta, Georgia, who started brewing his Coca-Cola formula in a three-legged brass kettle in his backyard. Pharmacist Caleb Bradham of New Bern, North Carolina, first made competitor Pepsi in the 1890s, and the brand was trademarked on June 16, 1903. These companies have built brand identification and customer loyalties that have made them historical landmarks. Today Pepsi and Coke control around 90% of the soft drink market, making it one of the best-known oligopolies in the U.S. An oligopoly is a market dominated by so few sellers that an action by any one of them will affect both the price of the good and the competitors. Some characteristics of an oligopoly are:
* The dominant firms face significant barriers to entry, and exit is difficult.
* Access to information is limited.
* The dominant firms have significant market power; they set their own prices.
* The product may be homogeneous or differentiated.
* A few large firms dominate the market, i.e. they hold a substantial market share.
There is a mutual interdependence among the dominant firms; this means that competition is personal, and each firm recognizes that its actions affect its rivals and that theirs affect it. Economies of scale deter entry by forcing an entrant either to come in at a large scale and risk a strong reaction from existing firms, or to come in at a small scale and accept a cost disadvantage. Barriers to entry are high in the soft drink industry because both the soft drink companies and the bottlers are factors in entering this market. These two parts of the industry are extremely interdependent, sharing costs in procurement, production, marketing and distribution. Many of their functions overlap; for instance, Pepsi can do some bottling, and bottlers conduct many promotional activities. The industry is already vertically integrated to some extent, and the two parts also deal with similar suppliers and buyers.
Entry into the industry would involve developing operations in either or both market segments. Beverage substitutes would threaten both Pepsi and Coke and their associated bottlers. Because of the operational overlap and the similarities in their market environment, we can include Pepsi, Coke and the bottlers in our definition of the soft drink industry. This industry as a whole generates positive economic profits. Pepsi and Coca-Cola are the dominant firms in this market, controlling approximately 90% of the market share. There is also a mutual interdependence among the dominant firms, so every change Pepsi makes in marketing strategy, pricing and/or brand expansion affects Coke. Figure 1 shows the demand curve. The kink occurs at the established market price, and it suggests that a competitor would react asymmetrically to price increases and price decreases by the firm. Consider the soft drink market, where Pepsi and Coke combined have over 90% of the market share, and suppose the price is established at $1.99 for a six-pack of either Pepsi or Coke. Now look at the demand curve for Pepsi. If Pepsi raises its price to $2.49 per six-pack, it will lose some of its market to Coke along the AB segment of the demand curve in Figure 1: Pepsi will sell 500 six-packs a day instead of the original 1000. Coke is likely to stay at $1.99 and enjoy the additional sales, as some people who were buying Pepsi will switch to Coke. If instead Pepsi lowers its price to $1.49 to gain an advantage over Coke and increase its sales to 1500 six-packs, it may not succeed. The increase in Pepsi's sales to 1500 could only happen if Coke did not react to Pepsi's price cut; however, Coke is likely to match the price reduction to protect itself against a loss of market share.
As a result of price cuts by both Pepsi and Coke, sales of both will increase, at least partially at the expense of smaller competitors. In our example, Pepsi's sales rise to 1300 six-packs per day from the original 1000, along the BC segment of the demand curve. There are therefore two demand curves facing Pepsi: AB for price increases, with no reaction by Coke, and BC for price decreases, with a price-matching reaction by Coke. This explains the kinked demand curve for Pepsi, and similarly for Coke. Notice that the kink in the demand curve is at the established market price, and that the established price tends to be maintained: neither Pepsi nor Coke is inclined to raise its price, since doing so would mean losing sales and market share to the rival, and neither is particularly interested in lowering the price and starting a price war, since the outcome would be a loss of profit for both in favor of consumers. Figure 2 shows profit maximization under oligopoly. If we add the cost curves for a firm such as Coke or Pepsi to the demand and marginal revenue model, we can determine the profit-maximizing level of output. The profit-maximizing level of output is 1000 six-packs of Pepsi, where MC = MR. Pepsi can sell this quantity at $1.99 according to the demand curve. The average total cost of production at the 1000-unit level of output is $0.99 per six-pack, so the company is making $1000 a day of excess profit, as illustrated in Figure 2. Moderate changes in the cost conditions of oligopolists do not change their profit-maximizing quantity and price as long as marginal cost stays in the vertical range of the MR curve. This implies that technological improvements that lower the cost of production, or changes in the price of inputs faced by an oligopolist, would not lead to a quantity or price change. This suggests that under oligopoly, market prices are rigid.
Firms especially avoid lowering their price from fear of igniting a price war. Instead oligopolies resort to non-price competition such as advertising. Price wars can and occasionally do occur when one of the dominant firms in the oligopoly market experiences a significant decrease in its production cost and attempts to increase its market share.
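The kinked-demand arithmetic above can be sketched numerically. The two linear demand segments below are interpolations through the quoted (price, quantity) pairs; the linearity and the resulting slopes are assumptions for illustration only.

```python
# Sketch of the kinked demand curve for Pepsi, built from the quoted points:
# ($2.49, 500) and ($1.99, 1000) on segment AB (Coke does not react),
# ($1.99, 1000) and ($1.49, 1300) on segment BC (Coke matches the cut).
def pepsi_quantity(price):
    if price >= 1.99:
        # Segment AB: raising price loses share to Coke (slope -1000 per dollar).
        return 1000 - 1000 * (price - 1.99)
    # Segment BC: Coke matches the cut, so sales grow only modestly
    # (slope -600 per dollar).
    return 1000 - 600 * (price - 1.99)

atc = 0.99  # average total cost per six-pack at the 1000-unit output
for p in (2.49, 1.99, 1.49):
    q = pepsi_quantity(p)
    profit = (p - atc) * q
    print(f"price ${p:.2f}: {q:.0f} six-packs, profit ${profit:.0f}/day")
```

Running the loop shows that the established $1.99 price yields the highest daily excess profit of the three, consistent with the claim that neither raising nor cutting the price pays off.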

Monday, July 29, 2019

Intellectual Disabilities Research Paper Example | Topics and Well Written Essays - 1500 words

Intellectual Disabilities - Research Paper Example Moreover, intellectual disabilities affect individuals during aging. Understanding intellectual disability is critical in education in order to help students with this condition (Woodcock & Vialle, 2010). This paper will discuss the definition of intellectual disability, its characteristics and its impact on intellectual functioning and adaptive behavior; strategies to assist students in this disability category will also be described. Defining Intellectual Disability The World Health Organization describes intellectual disability as a significant reduction in the ability to comprehend new information and to learn and apply new skills. The American Association on Intellectual and Developmental Disabilities (AAIDD) explains that intellectual disability is not usually an isolated disorder. The AAIDD offers a three-dimensional definition of intellectual disability, and this is the most widely acknowledged definition (Barrett, 2011). According to the AAIDD, intellectual disability is a disorder that begins before the age of 18 and is characterized by significant limitations in intellectual functioning and adaptive behavior. Intellectual functioning refers to aspects of life such as learning, reasoning and problem solving (Barrett, 2011). Adaptive behavior, on the other hand, covers a range of practical and social skills in the areas of self-care, communication, self-direction, health, safety, leisure and work. The term intellectual disability has been introduced as a replacement for the previously used mental retardation (Jellinek, Patel & Froehle, 2002). The prevalence of intellectual disability in America is relatively high, with about one in every ten families affected. However, the estimated prevalence varies with the diagnostic criteria used, the study design and the method of ascertainment (Barrett, 2011).
For instance, when intelligence quotient (IQ) is used in diagnosis, the prevalence of intellectual disability is estimated at 3 percent, but when the AAIDD definition is applied, national prevalence stands at 1 percent. Prevalence of intellectual disability is higher among males, with a male-to-female ratio of about 1.5 to 1 (Barrett, 2011). Diagnosis and Assessment of ID Assessment of intellectual disability involves a multidisciplinary team comprising psychiatrists, pediatricians, psychologists and clinical geneticists. The assessment is usually comprehensive: the patient's intellectual ability, adaptive behavior, and medical and family history are all assessed (Garbutt, 2010). The DSM-IV-TR offers standardized criteria for diagnosing the disorder in both children and adults. Intellectual disability is characterized by below-average intellectual functioning. One defining characteristic is that the disorder begins before the age of 18; the DSM-IV-TR requires that all symptoms of intellectual disability have begun before that age (Garbutt, 2010), although this does not preclude diagnosis after 18 years. Children who have not yet reached the age of two, however, should not normally undergo intellectual disability diagnosis; an exception may be made when a child demonstrates severe symptoms associated with intellectual disability, for instance Down syndrome (Garbutt, 2010). The other characteristic of intellectual disability is poor adaptive functioning. Adaptive functioning is described as the effectiveness of an individual in functioning in tandem with

Sunday, July 28, 2019

MP3s, and the Music of Today Essay Example | Topics and Well Written Essays - 500 words

MP3s, and the Music of Today - Essay Example Covach's selections are genuinely representative of the songs of the 2000s in terms of the diversity of musical forms, genres, and styles of the artists. The artists noted were distinct and unique, displaying varied personal images and exuding different musical styles (for instance, the songs sung by Carrie Underwood were significantly different from those sung by OutKast). Each artist (whether a solo singer or a band) and each song has its own patrons and target audience who appreciate the style and expression rendered by their favorite singers. The other set of singers, Radiohead, Gogol Bordello and OutKast, compose and sing songs that are also typical of the 2000s in their innovative musical prowess: integrating different styles and forms, using innovative and creative instruments, and interpreting their songs to cater to their respective audiences. One would not, however, have known these singers in particular were it not for the course. One trend that could have been overlooked in these surveys of 2000s rock music is the profile of the audiences: which particular target audiences, in terms of specific demographic factors, are drawn to each of the identified artists. These profiles, especially age range, cultural orientation, gender, and ethnic background, could provide illuminating details about the past, current and future trends manifested in 2000s rock music, and could thereby give some indication of how these trends might persist in the near future. One believes that more foreign artists (such as Korean musicians) became increasingly popular in 2000s music; this kind of musical genre and format could be included and would be interesting to evaluate.
One prominent artist who contributed to that trend is Psy, and K-pop music was made famous by Korean dramas such as Boys Over Flowers and Hot

Saturday, July 27, 2019

Human Right Assignment Essay Example | Topics and Well Written Essays - 750 words

Human Right Assignment - Essay Example The paper outlines and discusses the different ways in which the death penalty is a limitation on human rights. It also explores the differences between the International Covenant on Civil and Political Rights (ICCPR) and the Inter-American Convention on Human Rights. The ICCPR has yet to prohibit the death penalty, but it does not give countries free rein to apply it at will: it defines the right to life and outlines measures to follow before condemning a person to this sentence. Article six of the covenant states that all human beings have the right to life and that nobody should be denied this right. The death penalty can only apply when a severe crime has occurred and the offender has been declared guilty by a competent court of justice. Upon the conviction of a person to death, the article provides that the person can still seek pardon or amnesty from the government. Lastly, if any other ICCPR right has been violated, the imposition of the death penalty is invalid. Other limitations include the exemption from execution of pregnant women and of any person charged with an offense committed while under the age of eighteen. The death penalty also infringes on other human rights. Indeed, convicts face other forms of inhuman treatment before execution; for instance, they can face torture or exposure to great pain. Most of the countries that still execute offenders have delinked the death penalty from human rights so that the penalty is seen as just another form of punishment and public criticism is avoided (Schabas, 2008). The ICCPR, the Inter-American Convention on Human Rights, and the Arab Charter all perform the same function: they limit what a government can do to people within its jurisdiction. These different instruments came into force after 1945 as a way of responding to the violations of human rights during and before the

The Analysis of a Joke Essay Example | Topics and Well Written Essays - 1000 words

The Analysis of a Joke - Essay Example "Although we think of the joke as a cultural constant, it is a form of humor that comes and goes with the rise and fall of civilizations."1 The joke that was chosen was the following: A doctor walked into a bank. Preparing to endorse a check, he pulled a rectal thermometer out of his shirt pocket and tried to write with it. Realizing his mistake, he looked at the thermometer with annoyance and said, "Well, that's great, just great... some asshole's got my pen."2 The category this particular joke probably belongs in is the scatological category, because it deals with a reference to the rectum. It is unclear why, but these types of jokes can be particularly compelling, because excrement seems to be something that humans find very funny. Of course, a simpler way to say that is just, "Poo is funny." But why? What is so funny about our own feces? Fundamentally, excrement is elemental; if we didn't have it, there would be no such jokes. But why are feces, farts, and in fact the entire range of human bodily functions fodder for jokes? One must wonder. What makes this particular joke funny is that, through a play on words, we imagine the pen being stuck in some patient's behind; it seems pretty funny that there would be a mix-up like that. Thus, there is a play on words, and we find this joke, for the most part, funny, if also a bit crude. ... It is not using very polite language either, and this is where the aggressive element demonstrates itself. It is not a polite joke, and it probably should not be shared in mixed company, unless that company is as foul-mouthed as the language used in the joke. Although the joke's language is not overly offensive, it does say something about the medical profession as well. Doctors are sometimes inept, and it is easy to make jokes about doctors and lawyers because they both have high-stress professions.
Humor can be a wonderful way to deflect problems, as well as a platform for expressing one's personality. This is why comedians like Jerry Seinfeld did especially well with the show Seinfeld, and why, subsequently, comedians like Larry David did so well with the show Curb Your Enthusiasm. These humorous shows have something in common: they use real-life situations as fodder for what is called situational comedy (or a sitcom). Situational comedies bring real-life problems to light. Who could ever forget the following bits: "Are you sponge-worthy?" "She's got man hands!" Pig Man. The Soup Nazi. "You double-dipped the chip." "Serenity now!" "Give me that, you old bag!" Who will ever forget these classic moments in Seinfeld history? These, and a series of other vignettes in his subsequent spin-off hit HBO comedy series, Curb Your Enthusiasm, were brought to you by none other than comedian, writer, actor, and executive producer Larry David. Larry David was the head writer and executive producer of Seinfeld, which won him a Primetime Emmy Award for Outstanding Comedy Series in the show's fourth year. Seinfeld made

Friday, July 26, 2019

Bacteria Essay Example | Topics and Well Written Essays - 1250 words

Bacteria - Essay Example Once there, they physically change: they become smaller, lose their flagella and begin to give off a natural glow. There are a number of interesting aspects to this particular bacterium, including symbiotic living, a special relationship with certain species of cephalopods, host animals that can actually glow in the dark, and a role in preserving nature's polluted waters. Again, this bacterium is often found inside and on fish and certain species of cephalopods, like octopus and squid (O'Brien). However, the bacteria are not harmful once inside another species; in fact, the relationship is quite beneficial for both. Vibrio fischeri rely on the fish for a protective environment, and the bacteria create a very special reaction that is incredibly beneficial to the host species: five genes that, when active, drive an oxidation process in the host's system that causes the animal to literally glow in the dark (Maiden). Despite how unusual and strange that may sound, it is absolutely true. In truth, 90% of fish and sea life carry some amount of these bacteria in their systems or on their bodies, although some creatures glow brighter than others (Widder). It is these bacteria that have been blamed for instances in which the processing of fish products has occasionally resulted in slightly glowing fish sticks (Maiden). One species in particular is the prime example of this phenomenon: the bobtail squid, native to the waters of Hawaii, has colonies of these bacteria living on its underside. The squid possesses a structure called the light organ, which is similar in make-up to an eye, possessing both an iris and a lens, and which allows the squid to produce a glowing light. To predators looking up at the squid it appears to be

Thursday, July 25, 2019

Numerical analysis Math Problem Example | Topics and Well Written Essays - 1500 words

Numerical analysis - Math Problem Example This method of numerical integration produces the solution as a table of computed values. The equation given in the task was solved in Mathcad using the program module that solves differential equations with a fixed step: F := rkfixed(Z0, t0, tk, N, f). To evaluate the results, we can solve the same equation by conventional means; as shown, the equation is solved by the method of separation of variables. After finding the function, we plug the values of t into it and compute its values for all values of t on the interval [0, 1]. The results of the Runge-Kutta numerical solution are very close to the true values of the function. Using the error measure |real value of function - approximated value| / real value of function, we get the following results:

t       real value   relative error
0.0     1            0
0.1     1.005013     1.25E-05
0.2     1.020201     0.000197
0.3     1.046028     2.66E-05
0.4     1.083287     0.000265
0.5     1.133148     0.000131
0.6     1.197217     0.000182
0.7     1.277621     0.0003
0.8     1.377128     9.28E-05
0.9     1.499303     0.000202
1.0     1.648721     0.00017

As we can see, the results are very reliable, as the relative error stays below 0.0003 for all values.
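The differential equation itself is not reproduced in the excerpt, but the tabulated real values match y(t) = exp(t^2/2), the solution of y' = t*y with y(0) = 1, so that ODE is assumed below. A fixed-step classical fourth-order Runge-Kutta sketch of the same computation, analogous to Mathcad's rkfixed(Z0, t0, tk, N, f), might look like this:

```python
import math

def rk4_fixed(f, y0, t0, tk, n):
    """Classical 4th-order Runge-Kutta with n fixed steps on [t0, tk].
    Returns a list of (t, y) pairs, like the table produced by rkfixed."""
    h = (tk - t0) / n
    t, y = t0, y0
    out = [(t, y)]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        out.append((t, y))
    return out

# Assumed ODE: y' = t*y, y(0) = 1, whose exact solution is exp(t^2/2).
sol = rk4_fixed(lambda t, y: t * y, 1.0, 0.0, 1.0, 10)
for t, y in sol:
    exact = math.exp(t ** 2 / 2)
    rel_err = abs(exact - y) / exact
    print(f"t={t:.1f}  rk4={y:.6f}  exact={exact:.6f}  rel.err={rel_err:.2e}")
```

With step h = 0.1, the fourth-order method reproduces the exact values to many more digits than the errors quoted above, which suggests those errors reflect the precision at which intermediate values were recorded rather than the method itself.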

Wednesday, July 24, 2019

Communications Technology Essay Example | Topics and Well Written Essays - 2250 words

Communications Technology - Essay Example No way was I going to pedal over some of those hills though :o Lucky for me the bus drivers were happy to put my bike on a carrier on the front of the bus and drive me to a destination where I could pedal around exploring. Beautiful country... too bad about the tourism dollars going out of the country - but that's another blogg in itself. In 2004 I spent three weeks in Palau for the 9th Pacific Arts Festival - that was awesome! So many cultures, so little time. I plan on traveling the eastern coastline of Australia in 2007 - the web pics are breathtaking, and I want to learn to scuba dive. I am also interested in robotics, and have built myself a little bot I've named Nox (yes, I'm a Gater [Stargate fan]). I enjoy building things and I enjoy watching Nox learn things - like how to find his way through a maze (I made one out of cardboard) and how to find his way through my apartment (lots of trial and error and the occasional broken glass). At the moment I am building a little sister for Nox; her name is Major Carter (yep, Stargate again!). But trying to build around my studies can be difficult, as all I seem to do is study (_*) I became interested in communications technology at high school. I firmly believe it is necessary for humans to have effective communications: interpersonally, nationally and globally. I also think it important that each person be able to critically analyze and reflect on the information communicated to us, and that each person have access to information, so that we can make informed decisions and so that institutions, corporations and individuals can be held accountable for their actions. It helps that the CT industry is growing rapidly, as that means I am more likely to obtain gainful employment - in a job I enjoy!
Also, the dynamic nature of the internet will allow me to be innovative and expressive in a variety of ways that suit my character - color, FUN, interaction, connections with like-minded individuals and businesses etc. At this point, web programming is a definite interest. I will be able to design sites for others, as well as develop my own about topics which are important to me. Due to current time constraints ;) I can begin a soapbox from my blogg, for dissemination on an important CT topic: ethics!!! Ethics brings to mind for many people, including students, including myself, the word boring... it conjures up images of dry, drab and complicated documents bound in dusty covers, sectioned away in the far corners of libraries. It is unfortunate that we, and I emphasize the collective "we", do not take more of an interest in our rights as citizens of this globe. For that is what ethics is - guidelines for maintaining our freedoms as human beings, our dignity, our privacy, and our accountability to the global community we are a part of. With a medium such as the internet, ethics becomes further complicated and even more important - at least I think so. The internet is an unmediated communication environment. This very blogg that you read now has been created by a person whom you do not know, are likely never to meet, and who is going to say a great (and they will be great :) many things about people, and government, and business - and maybe even you. How do you feel about that? I know that for myself I have several netiquette concerns (etiquette for the www). As I write this blogg I feel a pressing need to abscond to the loo (sorry

Tuesday, July 23, 2019

Auditing theory and practice Essay Example | Topics and Well Written Essays - 2000 words

Auditing theory and practice - Essay Example fy these weaknesses, considered as risks associated with these financial statements, we perform comparative year-on-year and ratio analyses, which may be effective in identifying possible problem areas for additional analysis and audit testing, and for which we can provide other assistance. Among others: The company's short-term debt-paying ability. We analyse the company's liquidity status: its current ratio is 1.28 (lower than 1.31 in 2004), which indicates that the company should have sufficient funds to pay its short-term debts. Our calculation also indicates that the company will be able to collect the amounts owed by its customers, except that the average collection period of 82 days is 15 days longer than in the previous year, which may have a negative effect on cash flow. Since the company does not have sufficient cash to meet its short-term obligations, it may consider lengthening the time it takes to convert less liquid assets into cash. Short-term liquidity. The company's balance sheet shows a negative cash balance; it is likely that the company cannot meet its obligations. Its debt-paying ability is therefore measured by the length of time it takes the company to convert its current assets into cash. The balance sheet indicates a large bank overdraft. If necessary, the company has a cash turnover rate of 2.5 times (down by 0.89), a receivables turnover rate of 4.36 times (down by 0.94), can recover the value of its fixed assets 51.17 times (down from 56.93), and overall has a 2.5-times (down from 3.2) ability to convert its assets into cash to cover the bank overdraft. Given the absence of inventory, it is possible that the company has an inventory obsolescence problem. Ability to meet long-term debt obligations. The company's debt-to-equity ratio is 3.35, down from 3.97 in 2004.
There is a possibility that the company would be able to raise fund through borrowing.
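The year-on-year comparison described above can be sketched as follows. The 2005 figures are the ratios quoted in the text; the prior-year turnover values are back-calculated from the stated declines, and the 360-day convention for the collection period is an assumption (it reproduces the roughly 82-day figure quoted):

```python
# Ratios quoted in the analysis; 2004 turnover figures are back-calculated
# from the stated year-on-year declines (an assumption for illustration).
ratios_2005 = {"current_ratio": 1.28, "receivables_turnover": 4.36,
               "cash_turnover": 2.50, "debt_to_equity": 3.35}
ratios_2004 = {"current_ratio": 1.31, "receivables_turnover": 5.30,
               "cash_turnover": 3.39, "debt_to_equity": 3.97}

# Average collection period under an assumed 360-day banking year.
days_to_collect = 360 / ratios_2005["receivables_turnover"]

for name, now in sorted(ratios_2005.items()):
    prev = ratios_2004[name]
    print(f"{name}: {now:.2f} ({now - prev:+.2f} vs prior year)")
print(f"average collection period: {days_to_collect:.0f} days")
```

Printing each ratio next to its prior-year change makes the pattern the auditors describe (liquidity and turnover all weakening) immediately visible.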

Monday, July 22, 2019

Learning theories Essay Example for Free

Learning theories Essay Primary Research: Primary research consists of the collection of original primary data. It is often undertaken after the researcher has gained some insight into the issue by reviewing secondary research or by analyzing previously collected primary data. It can be accomplished through various methods, including questionnaires and telephone interviews in market research, or experiments and direct observations in the physical sciences, amongst others. Secondary Research: Secondary research (also known as desk research) involves the summary, collation and/or synthesis of existing research rather than primary research, in which data is collected from, for example, research subjects or experiments. The term is widely used in medical research and in market research. The principal methodology in medical secondary research is the systematic review, commonly using meta-analytic statistical techniques, although other methods of synthesis, like realist reviews and meta-narrative reviews, have been developed in recent years. Such secondary research uses the primary research of others, typically in the form of research publications and reports. In a market research context, secondary research is taken to include the re-use by a second party of any data collected by a first party or parties. In archaeology and landscape history, desk research is contrasted with fieldwork. Primary Research vs Secondary Research: One of the major differences between the two is that primary research is conducted with the help of primary sources, whereas secondary research is conducted on the basis of data collected by someone who obtained it from those sources. Primary research is expensive to conduct, since it involves primary sources, while secondary research is not as expensive. Another major difference is that primary research is much more time-consuming than secondary research.
As a matter of fact, the results found by primary research are usually of better quality than those produced by secondary research. Primary research is also usually detailed and elaborated, since it is supposed to be both qualitative and quantitative. On the other hand, data pertaining to secondary research is usually not as detailed and elaborated, since it involves indirect use. Primary research is done with a lot of hard work and dedication. On the other hand, secondary research is usually presented with a number of data and records, which are usually taken from books, periodicals published by governmental organizations, statistical data, annual reports and case studies.

ORGANIZATIONAL BEHAVIOUR: Organizational behavior is a field of study that investigates the impact that individuals, groups and structures have on behavior within an organization, for the purpose of applying such knowledge towards improving an organization's effectiveness. It is an interdisciplinary field that includes sociology, psychology, communication, and management; and it complements the academic studies of organizational theory (which is focused on organizational and intra-organizational topics) and human resource studies (which is more applied and business-oriented). It may also be referred to as organizational studies or organizational science. The field has its roots in industrial and organizational psychology. Organizational studies encompass the study of organizations from multiple viewpoints, methods, and levels of analysis. For instance, one textbook divides these multiple viewpoints into three perspectives: modern, symbolic, and postmodern.
Another traditional distinction, present especially in American academia, is between the study of micro organizational behaviour — which refers to individual and group dynamics in an organizational setting — and macro strategic management and organizational theory, which studies whole organizations and industries, how they adapt, and the strategies, structures and contingencies that guide them. To this distinction, some scholars have added an interest in meso-scale structures — power, culture, and the networks of individuals and units in organizations — and field-level analysis, which studies how whole populations of organizations interact. Whenever people interact in organizations, many factors come into play. Modern organizational studies attempt to understand and model these factors. Like all modernist social sciences, organizational studies seek to control, predict, and explain. There is some controversy over the ethics of controlling workers' behavior, as well as the manner in which workers are treated (see Taylor's scientific management approach compared to the human relations movement of the 1940s). As such, organizational behaviour or OB (and its cousin, industrial psychology) has at times been accused of being the scientific tool of the powerful. Those accusations notwithstanding, OB can play a major role in organizational development, enhancing organizational performance, as well as individual and group performance, satisfaction and commitment. One of the main goals of organizational theorists is, according to Simms (1994), "to revitalize organizational theory and develop a better conceptualization of organizational life." An organizational theorist should carefully consider the level of assumptions being made in theory, and is concerned to help managers and administrators.

1. INTRODUCTION TO LEARNING: The process of learning has great value for enriching human life in all spheres.
All activities and behaviors that make personal, social and economic life peaceful and pleasurable are learned. Learning definitely affects human behaviour in organizations. There is little organizational behaviour that is not either directly or indirectly affected by learning. For example, a worker's skill, a manager's attitude, a supervisor's motivation and a secretary's mode of dress are all learned. Our ability to learn is also important to organizations preoccupied with controlled performance. Employees have to know what they are to do, how they are to do it, how well they are expected to do it, and the consequences of achieving good or poor levels of performance. Thus, learning theories have influenced a range of organizational practices concerning:
1. The induction of new recruits
2. The design and delivery of job training
3. The design of payment systems
4. How supervisors evaluate and provide feedback on employee performance
5. The design of forms of learning organization

The concept of the learning organization became popular during the 1990s. The learning organization is a configuration of structures and policies which encourage individual learning, with individual and organizational benefits. The organization itself can also be regarded as an entity which is capable of learning independently of its members. Knowledge has thus become a more important asset for many organizations than materials and products.

1.1 WHAT IS LEARNING: Learning covers virtually all behaviors and is concerned with the acquisition of knowledge, attitudes and values, emotional responses (such as happiness and fear), and motor skills (such as operating a computer keyboard or riding a bicycle). We can learn incorrect facts or pick up bad habits in the same way that we learn correct facts and acquire good habits. It refers to a spectrum of changes that occur as a result of one's experience.
Learning may be defined as any relatively permanent change in behaviour, or in behavioral potential, produced by experience. It may be noted here that some behavioral changes take place due to the use of drugs or alcohol, or to fatigue. Such changes are temporary and are not considered learning. Therefore, only changes that are due to practice and experience, and that are relatively permanent, are illustrative of learning. From the definition given above, it is clear that the process of learning has certain distinctive characteristics. These are:

First, learning always involves some kind of experience. These experiences may be derived from inside the body, or they may be sensory, arising outside. The task of inferring whether or not learning has taken place may seem an obvious one, but observable behaviour may not always reveal learning. It is important to distinguish between two types of learning. Procedural learning, or 'knowing how', concerns your ability to carry out particular skilled actions, such as riding a horse. Declarative learning, or 'knowing that', concerns your store of factual knowledge, such as an understanding of the history of our use of the horse.

Second, the behavioral changes that take place due to learning are relatively permanent. Behaviour can be changed temporarily by many other factors, in ways which we would not like to call learning. These other factors include growing up or maturation (in children), aging (in adults), drugs, alcohol and fatigue. For example, you must have noticed that whenever one takes a sedative, drug or alcohol, one's behaviour changes. Each of these substances affects physiological functions, leading to certain changes in behaviour. Such changes are temporary in nature and disappear as the effect of the drug wears off.

Third, learning cannot be observed directly. We can only observe a person's behaviour and draw the inference from it that learning has taken place. A distinction has to be made between learning and performance.
Performance is evaluated by some quantitative and some qualitative measures of output: for example, the number of calls a sales representative makes to customers, or the quality of a manager's chairing of a committee meeting. But learning acts as a constraint on the outcome. Normally, we cannot perform any better than we have learned, though there are occasions when the right motivational disposition and a supportive environment help to raise the level of performance. Researchers have found that increased motivation may improve our performance up to a point but, beyond this, increased motivation may cause a lowering of the level of performance.

2. PRECONDITIONS FOR LEARNING: Two preconditions for learning will increase the success of those who are to participate in such programs: employee readiness and motivation. The condition known as employee readiness refers to both maturational and experiential factors in the employee's background. Prospective employees should be screened to determine that they have the background knowledge or the skills necessary for learning what will be presented to them. Recognition of individual differences in readiness is as important in an organization as it is in any other learning situation. It is often desirable to group individuals according to their capacity to learn, as determined by scores from tests, or to provide a different or extended type of instruction for those who need it. The other precondition for learning is that the employee be properly motivated. That is, for optimum learning the employee must recognize the need for acquiring new information or for having new skills, and a desire to learn must be maintained as learning progresses. While people at work are motivated by certain common needs, they differ from one another in the relative importance of these needs at any given time. For example, new recruits often have an intense desire for advancement, and have established specific goals for career progression.
Objectives that are clearly defined will produce increased motivation in the learning process when instructional objectives are related to individual needs.

3. SOME PREREQUISITES FOR LEARNING: After employees have been placed in the learning situation, their readiness and motivation should be assessed further. In addition, facilitators should understand the basic learning issues discussed below.

3.1 MEANINGFUL MATERIALS: In accordance with adult learning theories, the material to be learned should be organized in as meaningful a manner as possible. It should be arranged so that each successive experience builds upon preceding ones, so that the employee is able to integrate the experiences into a usable pattern of knowledge and skills. The material should have face validity.

3.2 REINFORCEMENT: Anything which strengthens the employee's response is called reinforcement. It may be in the form of approval from the instructor or facilitator, or the feeling of accomplishment that follows the performance; or it may simply be confirmation by a software program that the employee's response was correct. It is generally most effective if it occurs immediately after a task has been performed. Behaviour modification, a technique that operates on the principle that behaviour which is rewarded positively (reinforced) will be exhibited more frequently in the future, whereas behaviour that is penalized or unrewarded will decrease in frequency, is often used for such purposes.

3.3 TRANSFER OF KNOWLEDGE: Unless what is learned in the development activity is applicable to what is required on the job, the effort will have been of little value. The ultimate effectiveness of learning, therefore, is to be found in the answer to the question: 'To what extent does what is learned transfer to the job?
' Helpful approaches include ensuring that conditions in the development program conform as closely as possible to those on the job, and coaching employees on the principles for applying to the job the behaviors which they have learned. Furthermore, once formal instruction has been completed, the supervisor must ensure that the work environment supports, reinforces and rewards the employee for applying the new skills or knowledge.

3.4 KNOWLEDGE OF PROGRESS: As an employee's development progresses, motivation may be maintained and even increased by providing knowledge of progress. Progress, as determined by tests and other records, may be plotted on a chart, commonly referred to as a learning curve. Exhibit 8.9 is an example of a learning curve that is common in the acquisition of many job skills.

4. PRINCIPLES OF LEARNING

A. Distributed Learning: Another factor that determines the effectiveness of learning is the amount of time given to practice in one session. Should training or development be undertaken in five two-hour periods or in ten one-hour periods? It has been found in most cases that spacing out the activities will result in more rapid learning and more permanent retention. This is the principle of distributed learning. Since the most efficient distribution will vary according to the type and complexity of the task to be learned, it is desirable to refer to the rapidly growing body of research in this area when an answer is required for a specific learning situation.

B. Whole vs. Part Learning: Most jobs and tasks can be broken down into parts that lend themselves to further analysis. The analysis of the most effective manner for completing each part then provides a basis for giving specific instruction.
Airline flight attendant jobs, for example, involve a combination of mechanistic duties (specific tasks that follow a prescribed routine) and organic duties (tasks that involve decision-making and individualized responses), which are best learnt separately and then combined to form the whole job responsibility. Thus, the prescribed takeoff and landing announcements, and formal safety procedures, are supplemented with separate learning activities about how to deal with difficult passengers or how to cope with food supply problems. In evaluating whole versus part learning, it is necessary to consider the nature of the task to be learned. If the task can be broken down successfully for part learning, it should probably be taught as a unit.

C. Practice and Repetition: It is those things we do daily that become a part of our repertoire of skills. Employees need frequent opportunities to practice their job tasks in the manner in which they will ultimately be expected to perform them. The individual who is being taught to operate a machine should have an opportunity to practice on it. Similarly, the supervisor who is being taught how to train should have supervised practice in training.

D. Multiple Sense Learning: It has long been acknowledged that the use of multiple senses increases learning. Smith and Delahaye state that about 80 per cent of what a person perceives is obtained visually, 11 per cent by hearing and 9 per cent by the other senses combined. It follows that, in order to maximize learning, multiple senses of the employees, particularly sight and hearing, should be engaged. Visual aids are therefore emphasized as being important to learning and development activities.

5. THEORIES OF LEARNING, OR APPROACHES TO LEARNING

1. BEHAVIORIST APPROACH: Behaviorism, as a learning theory, can be traced back to Aristotle, whose essay "Memory" focused on associations being made between events such as lightning and thunder.
Other philosophers who followed Aristotle's thoughts were Hobbes (1650), Hume (1740), Brown (1820), Bain (1855) and Ebbinghaus (1885) (Black, 1995). Pavlov, Watson, Thorndike and Skinner later developed the theory in more detail. Watson is the theorist credited with coining the term behaviorism. The school of adult learning theory that adopted these principles has become known as the school of behaviorism, which saw learning as a straightforward process of response to stimuli. The provision of a reward or reinforcement is believed to strengthen the response and therefore result in changes in behavior – this, according to this school of thought, is the test of whether learning has occurred. Spillane (2002) states, "the behaviorist perspective, associated with B. F. Skinner, holds that the mind at work cannot be observed, tested, or understood; thus, behaviorists are concerned with actions (behavior) as the sites of knowing, teaching, and learning". The hypothesis behind behaviorist learning theories is that all learning occurs when behavior is influenced and changed by external factors. Behaviorism disregards any notion that there may be an internal component to man's learning. Grippin and Peters (1984) emphasize an individual's subjugation to external stimuli as the determinant of response (i.e., behavior). Contiguity is understood as the timing of events that is necessary to bring about behavioral change, while reinforcement refers to the probability that repeated positive or negative events will produce an anticipated change in behavior.

1. (A) Classical Conditioning (Pavlov): Classical conditioning is a reflexive or automatic type of learning in which a stimulus acquires the capacity to evoke a response that was originally evoked by another stimulus. Originators and key contributors: first described by Ivan Pavlov (1849-1936), Russian physiologist, in 1903, and studied in infants by John B. Watson (1878-1958). Several types of learning exist.
The most basic form is associative learning, i.e., making a new association between events in the environment. There are two forms of associative learning: classical conditioning (made famous by Ivan Pavlov's experiments with dogs) and operant conditioning.

Pavlov's Dogs: In the early twentieth century, Russian physiologist Ivan Pavlov did Nobel prize-winning work on digestion. While studying the role of saliva in dogs' digestive processes, he stumbled upon a phenomenon he labeled "psychic reflexes." While an accidental discovery, he had the foresight to see its importance. Pavlov's dogs, restrained in an experimental chamber, were presented with meat powder, and their saliva was collected via a surgically implanted tube in their saliva glands. Over time, he noticed that his dogs would begin salivating before the meat powder was even presented, whether prompted by the presence of the handler or merely by a clicking noise produced by the device that distributed the meat powder. Fascinated by this finding, Pavlov paired the meat powder with various stimuli, such as the ringing of a bell. After the meat powder and bell (auditory stimulus) were presented together several times, the bell was used alone. Pavlov's dogs, as predicted, responded by salivating to the sound of the bell (without the food). The bell began as a neutral stimulus (i.e. the bell itself did not produce the dogs' salivation). However, by pairing the bell with the stimulus that did produce the salivation response, the bell was able to acquire the ability to trigger the salivation response. Pavlov therefore demonstrated how stimulus-response bonds (which some consider the basic building blocks of learning) are formed. He dedicated much of the rest of his career to further exploring this finding. In technical terms, the meat powder is considered an unconditioned stimulus (UCS) and the dog's salivation is the unconditioned response (UCR).
The bell is a neutral stimulus until the dog learns to associate the bell with food. Then the bell becomes a conditioned stimulus (CS), which produces the conditioned response (CR) of salivation after repeated pairings between the bell and food.

John B. Watson: Early Classical Conditioning with Humans. John B. Watson further extended Pavlov's work and applied it to human beings. In 1920, Watson studied Albert, an 11-month-old infant. The goal of the study was to condition Albert to become afraid of a white rat by pairing the white rat with a very loud, jarring noise (UCS). At first, Albert showed no sign of fear when he was presented with rats, but once the rat was repeatedly paired with the loud noise (UCS), Albert developed a fear of rats. It could be said that the loud noise (UCS) induced fear (UCR). The implications of Watson's experiment suggested that classical conditioning could cause some phobias in humans.

1. (B) GOMS Model (Card, Moran, Newell): The GOMS Model is a human information processing model that predicts what skilled users will do in seemingly unpredictable situations. Originators and proponents: Card, Moran and Newell in 1983; Bonnie John et al. This is the general term for a family of human information processing techniques that attempt to model and predict user behavior. Typically used by software designers, it analyzes a person's behavior in terms of four components:
Goals – something that the person wants to accomplish; can be high level to low level.
Operators – basic perceptual, cognitive, or motor actions used to accomplish goals, or actions that the software allows the user to make.
Methods – procedures (sequences) of sub-goals and operators that can accomplish a goal.
Selection rules – personal rules users follow in deciding what method to use in a given circumstance.
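The four GOMS components can be sketched as a small data structure. The Python sketch below is purely illustrative; the task, the operators and the timing figures are invented for the example and are not taken from Card, Moran and Newell:

```python
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str
    operators: list   # ordered perceptual, cognitive, or motor actions
    cost_ms: int      # rough predicted execution time for the method

@dataclass
class Goal:
    name: str
    methods: list = field(default_factory=list)

    def select(self, prefers_keyboard: bool) -> Method:
        # Selection rule: a personal preference decides between competing
        # methods; otherwise fall back to the cheapest predicted method.
        for m in self.methods:
            if prefers_keyboard and "keyboard" in m.name:
                return m
        return min(self.methods, key=lambda m: m.cost_ms)

# Goal: delete a word, achievable by two competing methods.
delete_word = Goal("delete a word")
delete_word.methods.append(Method(
    "mouse-method",
    ["move hand to mouse", "point at word", "double-click", "press Delete"],
    2200))
delete_word.methods.append(Method(
    "keyboard-method",
    ["move cursor past word", "press Ctrl+Backspace"],
    1400))

chosen = delete_word.select(prefers_keyboard=True)
print(chosen.name)  # keyboard-method
```

Analyses like this are used to compare interface designs by predicting which method a skilled user will choose and how long it will take, before any user testing is done.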
1. (C) Operant Conditioning (Skinner): A behaviorist theory based on the fundamental idea that behaviors that are reinforced will tend to continue, while behaviors that are punished will eventually end. Originators and key contributors: B. F. Skinner, who built upon Ivan Pavlov's theories of classical conditioning. Operant conditioning can be described as a process that attempts to modify behavior through the use of positive and negative reinforcement. Through operant conditioning, an individual makes an association between a particular behavior and a consequence.
Example 1: Parents rewarding a child's excellent grades with candy or some other prize.
Example 2: A schoolteacher awards points to those students who are the most calm and well-behaved. Students eventually realize that when they voluntarily become quieter and better behaved, they earn more points.
Example 3: A form of reinforcement (such as food) is given to an animal every time the animal (for example, a hungry lion) presses a lever.
The term "operant conditioning" was coined by the behaviorist B. F. Skinner, who believed that one should focus on the external, observable causes of behavior (rather than try to unpack the internal thoughts and motivations). Reinforcement comes in two forms: positive and negative.

Positive and negative reinforcers: Positive reinforcers are favorable events or outcomes that are given to the individual after the desired behavior. They may come in the form of praise, rewards, etc. Negative reinforcers typically are characterized by the removal of an undesired or unpleasant outcome after the desired behavior. A response is strengthened as something considered negative is removed. The goal in both of these cases of reinforcement is for the behavior to increase.

Positive and negative punishment: Punishment, in contrast, attempts to cause a decrease in the behavior that follows.
Positive punishment is when unfavorable events or outcomes are given in order to weaken the response that follows. Negative punishment is characterized by a favorable event or outcome being removed after an undesired behavior occurs. The goal in both of these cases of punishment is for a behavior to decrease.

What is the difference between operant conditioning and classical conditioning? In operant conditioning, a voluntary response is followed by a reinforcing stimulus. In this way, the voluntary response (e.g. studying for an exam) is more likely to be repeated by the individual. In contrast, classical conditioning is when a stimulus automatically triggers an involuntary response.

1. (D) Social Learning Theory (Bandura): Bandura's Social Learning Theory posits that people learn from one another via observation, imitation, and modeling. The theory has often been called a bridge between behaviorist and cognitive learning theories because it encompasses attention, memory, and motivation. Originator: Albert Bandura. People learn through observing others' behavior, attitudes, and the outcomes of those behaviors. "Most human behavior is learned observationally through modeling: from observing others, one forms an idea of how new behaviors are performed, and on later occasions this coded information serves as a guide for action" (Bandura). Social learning theory explains human behavior in terms of continuous reciprocal interaction between cognitive, behavioral, and environmental influences. Necessary conditions for effective modeling:
1. Attention — various factors increase or decrease the amount of attention paid, including distinctiveness, affective valence, prevalence, complexity and functional value. One's characteristics (e.g. sensory capacities, arousal level, perceptual set, past reinforcement) also affect attention.
2. Retention — remembering what you paid attention to.
Includes symbolic coding, mental images, cognitive organization, symbolic rehearsal and motor rehearsal.
3. Reproduction — reproducing the image. Includes physical capabilities and self-observation of reproduction.
4. Motivation — having a good reason to imitate. Includes motives such as past reinforcement (i.e. traditional behaviorism), promised reinforcement (imagined incentives) and vicarious reinforcement (seeing and recalling the reinforced model).

Bandura believed in "reciprocal determinism", that is, that the world and a person's behavior cause each other. Behaviorism essentially states that one's environment causes one's behavior; Bandura, who was studying adolescent aggression, found this too simplistic, and so in addition he suggested that behavior causes environment as well. Later, Bandura considered personality as an interaction between three components: the environment, behavior, and one's psychological processes (one's ability to entertain images in mind and language).

2. CONSTRUCTIVISM: Constructivism is a synthesis of multiple theories diffused into one form. It is the assimilation of both behaviorist and cognitive ideals. The "constructivist stance maintains that learning is a process of constructing meaning; it is how people make sense of their experience". This is a combined effect of using a person's cognitive abilities and insight to understand their environment. It coincides especially well with current adult learning theory, and the concept translates easily into a self-directed learning style, in which the individual takes in all the information and the environment of a problem and learns from it. Constructivism as a paradigm or worldview posits that learning is an active, constructive process. The learner is an information constructor. People actively construct or create their own subjective representations of objective reality. New information is linked to prior knowledge; thus mental representations are subjective.
Originators and important contributors: Vygotsky, Piaget, Dewey, Vico, Rorty, Bruner.

A reaction to didactic approaches such as behaviorism and programmed instruction, constructivism states that learning is an active, contextualized process of constructing knowledge rather than acquiring it. Knowledge is constructed based on personal experiences and hypotheses of the environment. Learners continuously test these hypotheses through social negotiation. Each person has a different interpretation and construction of the knowledge process. The learner is not a blank slate (tabula rasa) but brings past experiences and cultural factors to a situation. Vygotsky's theory is one of the foundations of constructivism. It asserts three major themes:
1. Social interaction plays a fundamental role in the process of cognitive development. In contrast to Jean Piaget's understanding of child development (in which development necessarily precedes learning), Vygotsky felt social learning precedes development. He states: "Every function in the child's cultural development appears twice: first, on the social level, and later, on the individual level; first, between people (inter-psychological) and then inside the child (intra-psychological)."
2. The More Knowledgeable Other (MKO). The MKO refers to anyone who has a better understanding or a higher ability level than the learner with respect to a particular task, process, or concept. The MKO is normally thought of as a teacher, coach, or older adult, but the MKO could also be a peer, a younger person, or even a computer.
3. The Zone of Proximal Development (ZPD). The ZPD is the distance between a student's ability to perform a task under adult guidance and/or with peer collaboration and the student's ability to solve the problem independently. According to Vygotsky, learning occurs in this zone.
Vygotsky focused on the connections between people and the sociocultural context in which they act and interact in shared experiences (Crawford, 1996). According to Vygotsky, humans use tools that develop from a culture, such as speech and writing, to mediate their social environments. Initially children develop these tools to serve solely as social functions, ways to communicate needs. Vygotsky believed that the internalization of these tools led to higher thinking skills.

3. COGNITIVISM: The cognitivist paradigm essentially argues that the "black box" of the mind should be opened and understood. The learner is viewed as an information processor (like a computer). Originators and important contributors: Merrill (Component Display Theory), Reigeluth (Elaboration Theory), Gagne, Briggs, Wager, Bruner (moving toward cognitive constructivism), Schank (scripts), Scandura (structural learning). The cognitivist revolution replaced behaviorism in the 1960s as the dominant paradigm. Cognitivism focuses on inner mental activities – opening the "black box" of the human mind is valuable and necessary for understanding how people learn. Mental processes such as thinking, memory, knowing, and problem-solving need to be explored. Knowledge can be seen as schemata, or symbolic mental constructions, and learning is defined as change in a learner's schemata. In response to behaviorism, cognitivism holds that people are not "programmed animals" that merely respond to environmental stimuli; people are rational beings who require active participation in order to learn, and whose actions are a consequence of thinking. Changes in behavior are observed, but only as an indication of what is occurring in the learner's head. Cognitivism uses the metaphor of the mind as a computer: information comes in, is processed, and leads to certain outcomes.
3.1 GESTALT PSYCHOLOGY: Gestalt psychology, or gestaltism (German: Gestalt – the essence or shape of an entity's complete form), is a theory of mind and brain of the Berlin School; the operational principle of gestalt psychology is that the brain is holistic, parallel, and analog, with self-organizing tendencies.
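Returning to the behaviorist section above, the core mechanic of operant conditioning — reinforcement increasing, and punishment decreasing, the frequency of a behavior — can be sketched as a toy simulation. The update rule and all numbers below are illustrative assumptions for the sake of the sketch, not part of Skinner's own formulation:

```python
def update(prob, consequence, step=0.1):
    # Reinforcement (positive or negative) strengthens the behaviour,
    # i.e. makes it more probable; punishment weakens it.
    if consequence == "reinforcement":
        return prob + step * (1.0 - prob)
    if consequence == "punishment":
        return prob - step * prob
    return prob  # no consequence: behaviour unchanged

p = 0.5  # initial chance that the lever is pressed
for _ in range(20):
    p = update(p, "reinforcement")   # food follows every press
print(round(p, 2))  # 0.94 -- repeated reward drives the behaviour up

for _ in range(20):
    p = update(p, "punishment")      # an aversive outcome follows instead
print(round(p, 2))  # 0.11 -- the behaviour is gradually extinguished
```

The asymptotic approach toward 1.0 under reinforcement, and the decay toward 0 under punishment, mirror the shape of the learning curve mentioned in section 3.4.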

Humanities Paper Essay

Humanities is a topic that has many wide-ranging meanings in regard to historical literature and the arts. After conducting the needed research on the topic at hand, the author will discuss their informational findings in a research paper. In this paper, the author will define the meaning of humanities and discuss a cultural event that has been experienced, such as music, dance, theater, art, or literature. The author will then elaborate on how a particular event was an expression of what he/she knew about the humanities, art, style, genius, and culture of the time period it represents. Finally, the author will explain how the selected form of cultural expression compares with other forms he/she knows about from the same time period. Hopefully, after reading this paper, the audience will have a better knowledge of humanities in the past, present, and future.

Humanities: Humanities can be very broad, but one of the meanings of humanities, according to the American Heritage Dictionary, is "Those branches of knowledge, such as philosophy, literature, and art, that are concerned with human thought and culture; the liberal arts" (American Heritage Dictionary, 2000). Culture is a big part of humanities. These patterns, traits, and products are considered the expressions of a particular period, class, community, or population (American Heritage Dictionary, 2000). Humanities has really helped pave the way for future endeavors, such as the way we think, conduct ourselves, and observe things.

Defining the Humanities Paper

Select a cultural event you have experienced, such as music, dance, theater, art, literature, or others. Growing up in Louisiana, I have had the honor and pleasure of experiencing many cultural events, but I must say my favorite cultural experience would have to be music. Music is one of the many ways I could escape from the turmoil in the world.
In high school I was in the choir, and my favorite instrument at the time was the piano. I used to lead the choir, and sometimes I was able to perform a solo piece while playing my piano. This was a great experience for me because I had an outlet to express myself without even speaking. My high school choir had the honor of performing at a nursing home in St. Francisville, where there were a lot of singers, ex-musicians, and dancers who had been in choirs and performed around Louisiana just like my school. They really enjoyed our performance, and they shared great stories about the times when they were in high school choir and how music has changed since their days of playing music. Explain how your selected event was an expression of what you know about the humanities, art, style, genius, and culture of the time period it represents. I learned so much about the expression of humanities just by listening to the older individuals talk about their experiences and the different ways they made music. They did not have the use of the different instruments we have now back in their day. These individuals made music with whatever they had and enjoyed every minute of it. They also expressed their concerns about the schools eliminating music programs; they feel that this takes "art" and "freedom of expression" away from our youth in the schools. Discuss how your selected form of cultural expression compares with other forms you know about from the same time period. Listening to the elderly individuals at the nursing home really gave me some insight on the cultural expression I chose, which was "music." It showed me, just by listening to each of them, that music was relevant just as much then as it is now. They just had a different way of making music and expressing themselves while doing so. They made music with cups, buckets, washboards, keys, and anything that made some kind of noise.
In today’s society, we now have advanced technology which allows each one of us to have access to different instruments, such as the piano, guitar, drums, etc. Although the elderly individuals did not have access to the instruments we have today, they still appreciated what they had. Conclusion In conclusion, after conducting research on the topic at hand, the author discussed their informational findings. The author defined the meaning of humanities and discussed a cultural event that was experienced, such as music, dance, theater, art, or literature. The author then elaborated on how a particular event was an expression of what he/she knew about the humanities, art, style, genius, and culture of the time period it represents. Finally, the author explained how the selected form of cultural expression compares with other forms that he/she knew about from the same time period. Hopefully, after reading this paper, the audience now has a better knowledge about Humanities in the Past, Present, and Future. Reference: www.ahdictionary.com

Sunday, July 21, 2019

Digital Composite in Special Effects

Digital Composite in Special Effects INTRODUCTION A massive spacecraft hovers over New York, throwing the entire city into shadow. A pair of lizards, sitting in the middle of a swamp, discusses their favourite beer. Dinosaurs, long extinct, live and breathe again, and the Titanic, submerged for decades, sails once more. Usually the credit for all these fantastic visuals goes to CGI (computer-generated imagery), or computer graphics. Computer graphics techniques, in conjunction with a myriad of other disciplines, are commonly used for the creation of visual effects in feature films. Digital compositing is an essential part of the visual effects that are everywhere in the entertainment industry today: in feature films, television commercials, and many TV shows, and it's growing. Even a non-effects film will have visual effects. Whatever the genre of the movie, there will always be something that needs to be added to or removed from the picture to tell the story. That is a short description of what visual effects are all about: adding elements to a picture that are not there, or removing something that you don't want to be there. Digital compositing plays a key role in all visual effects. It is the digital compositor who takes these disparate elements, no matter how they were created, and blends them together artistically into a seamless, photorealistic whole. The digital compositor's mission is to make them appear as if they were all shot together at the same time, under the same lights, with the same camera, then give the shots a final artistic polish with superb color correction. I mentioned earlier that digital compositing is growing. There are two primary reasons for this. First is the steady increase in the use of CGI for visual effects, and every CGI element needs to be composited. The second reason for the increase in digital compositing is that compositing software and hardware technologies are also advancing on their own track, separate from CGI.
This means that visual effects shots can be done faster, more cost-effectively, and with higher quality. There has also been a general rise in film-makers' awareness of what can be done with digital compositing, which makes them more sophisticated users. STRUCTURE In the Introduction phase I will deal with the history and origins of compositing: older compositing techniques such as optical compositing, in-camera effects, background projection, hanging miniatures, etc. I will also focus on how ground-breaking effects were created during the optical era, and on the advantages and disadvantages of optical compositing. In the Information Hub phase I will deal with the core concepts of live-action and multipass compositing, with a brief introduction to stereoscopic compositing. Under live-action compositing I will discuss the basics and core concepts such as rotoscopy, retouching, and motion tracking, with more emphasis on keying. In the multipass compositing section I will focus on the core concept of passes, the different types of passes, and their uses, followed by a brief introduction to stereoscopic compositing, an emerging technology in the world of computer graphics. In the Incredible Masters phase I will discuss the contributions of the pioneers who developed this field to its present extent, and give a brief introduction to the new technologies being used and developed. In the Case Study phase, the last segment of my dissertation proposal, I will discuss the ground-breaking effects techniques used in Hollywood blockbusters such as Terminator, The Golden Compass, and Finding Nemo. History of compositing In the summer of 1857, the Swedish-born photographer Oscar G. Rejlander set out to create what would prove to be the most technically complicated photograph that had ever been produced.
Working at his studio in England, Rejlander selectively combined the imagery from 32 different glass negatives to produce a single, massive print. It is one of the earliest examples of what came to be known as a combination print. Motion picture photography came about in the late 1800s, and the desire to be able to continue this sort of image combination drove the development of specialized hardware to expedite the process. Optical printers were built that could selectively combine multiple pieces of film, and optical compositing was born. Introduction to optical compositing Not to be confused with laboratory effects done on an optical printer, these use optical attachments which go in front of the lens. The intention of such apparatus is to modify the light path between subject and lens. There are many such accessories available for hire or purchase, but frequently they will be constructed for a particular shot. Techniques of optical compositing Glass Shot Otherwise known as the glass painting, Hall Process or (erroneously) glass matte or matte painting, the glass shot takes the mask painted on a sheet of glass to its logical conclusion. The next stage of complexity is to make these additions to the frame representational instead of purely graphic. For example, let's say that we have a wide shot of a farm with fields stretching off into the distance and require a silhouetted fence in the foreground. If the camera is focused on the distant hills then, with a sheet of glass positioned at the hyperfocal distance (the nearest point still in focus when focused on infinity), we can actually paint the piece of fence onto the glass. This is made possible by the two-dimensional quality of motion pictures. So long as nothing passes between the glass and the lens, and the glass is in focus, then an object painted to be the correct size for the scene when viewed through the lens will appear to be actually in that scene.
Thus the silhouette of a fence painted on the glass will appear totally believable, even if a cowboy and his horse pass by in the scene beyond. This minor change actually represents a fundamental leap in our effects capability, for now our mask has become a modification to the picture content itself rather than just an external decoration. However, once we have made this philosophical leap it is a small step to move on to creating photorealistic additions to the scene. The next stage is to light the camera side of our glass and paint details into the image thereon. In the example of the fence we now paint in the texture of the wood and expose it as required to blend in with the scene. Glass painting is a fundamental technique of VFX and can be applied to the latest digital equipment just as easily as it was to film prior to the First World War. Basically, if opaque paints are used (or are painted over an opaque base paint), what one is effectively doing is covering over detail in the real image with imaginary additions. This is a replacement technique and is the first of many in the VFX arsenal which permit falsification of real images. Rotoscopy Frequently, it comes to pass that a character or object that was not shot on bluescreen needs to be isolated for some reason, perhaps to composite something behind it or maybe to give it a special color correction or other treatment. This situation requires the creation of a matte without the benefit of a bluescreen, so the matte must be rotoscoped, which means it is drawn by hand, frame by frame. This is a slow and labor-intensive solution, but it is often the only solution. Even a bluescreen shot will sometimes require rotoscoping if it was not photographed well and a good matte cannot be extracted. Virtually all compositing programs have some kind of rotoscoping capability, but some are more capable than others. There are also programs available that specialize in just rotoscoping.
Each frame of the picture is put up on the monitor and the roto artist traces an outline around the character's outer edge. These outlines are then filled in with white to create the familiar white matte on a black background, like the example in Figure 1-12. Large visual effects studios will have a dedicated roto department, and being a roto artist is often an entry-level position for budding new digital compositors. There has even been a recent trend to use rotoscoping rather than bluescreen shots for isolating characters for compositing in big-effects films. I say big-effects films because it is much more labor-intensive, and therefore expensive, to rotoscope a shot than to pull a bluescreen matte. The big creative advantage is that the director and cinematographer can shoot their scenes on the set and on location naturally, rather than having to shoot a separate bluescreen shot with the talent isolated on a bluescreen insert stage. This allows the movie's creators to focus more on the story and cinematography rather than the special effects. But again, this is a very expensive approach. Rotoscoping is the process of drawing a matte frame-by-frame over live action footage. Starting around the year 50 B.C. (Before Computers), the technique back then was to rear-project a frame of film onto a sheet of frosted glass, then trace around the target object. The process got its name from the machine that was used to do the work, called a rotoscope. Things have improved somewhat since then, and today we use computers to draw shapes using the splines we saw in Chapter 5. The difference between drawing a single shape and rotoscoping is the addition of animation. Rotoscoping entails drawing a series of shapes that follow the target object through a sequence of frames. Rotoscoping is extremely pervasive in the world of digital compositing and is used in many visual effects shots.
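The outline-to-matte step described above can be sketched in a few lines of Python. This is only an illustrative toy, not any real roto package: the function names, the even-odd fill rule, and the tiny triangle outline are my own simplified stand-ins for what a production tool does with dense spline outlines.

```python
# Sketch of how a closed roto outline becomes a matte: every pixel inside
# the polygon is set to white (1.0), everything else stays black (0.0).
# A real tool would first sample its splines down to a point list like this.

def point_in_polygon(x, y, poly):
    """Even-odd rule: count how many edges a rightward ray crosses."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles this scanline
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize_matte(poly, width, height):
    """Return rows of matte values: 1.0 = solid white, 0.0 = black."""
    return [[1.0 if point_in_polygon(x + 0.5, y + 0.5, poly) else 0.0
             for x in range(width)]
            for y in range(height)]

# A small triangular "character" traced in an 8x8 frame.
outline = [(1.0, 1.0), (6.0, 1.0), (3.5, 6.0)]
matte = rasterize_matte(outline, 8, 8)
```

Animating the outline over successive frames and re-rasterizing each one is, in essence, the whole rotoscoping loop.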
It is also labor-intensive because it can take a great deal of time to carefully draw moving shapes around a moving target frame by frame. It is often an entry-level position in the trade and many a digital compositor has started out as a roto artist. There are some artists who find rotoscoping rewarding and elect to become roto kings (or queens) in their own right. A talented roto artist is always a valued member of the visual effects team. In this chapter, we will see how rotoscoping works and develop an understanding of the entire process. We will see how the spline-based shapes are controlled frame-by-frame to create outlines that exactly match the edges of the target object, as well as how shapes can be grouped into hierarchies to improve productivity and the quality of the animation. The sections on interpolation and keyframing describe how to get the computer to do more of the work for you, and then finally the solutions to the classic problems of motion blur and semi-transparency are revealed. ABOUT ROTOSCOPING Today, rotoscoping means drawing an animated spline-based shape over a series of digitized film (or video) frames. The computer then renders the shape frame-by-frame as a black and white matte, which is used for compositing or to isolate the target object for some special treatment such as color correction. The virtue of roto is that it can be used to create a matte for any arbitrary object on any arbitrary background. It does not need to be shot on a bluescreen. In fact, roto is the last line of defense for poorly shot bluescreens in which a good matte cannot be created with a keyer. Compositing a character that was shot on an uncontrolled background is illustrated beginning with Figure 6-4. The bonny lass was shot on location with the original background. A roto was drawn (Figure 6-5) and used to composite the woman over a completely new background (Figure 6-7). No bluescreen was required. There are three main downsides to roto.
First, it is labor-intensive. It can take hours to roto a simple shot such as the one illustrated in Figure 6-4, even assuming it is a short shot. More complex rotos and longer shots can take days, even weeks. This is hard on both schedules and budgets. The second downside to roto is that it can be difficult to get a high-quality, convincing matte with a stable outline. If the roto artist is not careful, the edges of the roto can wobble in and out in a most unnatural, eye-catching way. The third issue is that rotos do not capture the subtle edge and transparency nuances that a well-done bluescreen shot does using a fine digital keyer. If the target object has a lot of very fine edge detail like a frizzy head of hair, the task can be downright hopeless. SPLINES In Chapter 5, we first met the spline during the discussion of shapes. We saw how a spline was a series of curved lines connected by control points that could be used to adjust the curvature of those lines. We also used the metaphor of a piano wire to describe the stiffness and smooth curvature of the spline. Here we will take a closer look at those splines and how they are used to create outlines that can fit any curved surface. We will also push the piano wire metaphor to the breaking point. A spline is a mathematically generated line whose shape is controlled by adjustable control points. While a variety of mathematical equations have been devised that will draw slightly different kinds of splines, they all work in the same general way. Figure 6-8 reviews the key components of a spline that we saw in Chapter 5: the control point, the resulting spline line, and the handles that are used to adjust its shape. In Figure 6-8, the slope of the spline at the control point is being adjusted by changing the slope of the handles from position 1 to position 2 to position 3. For clarity, each of the three spline slopes is shown in a different color.
The handles can also adjust a second attribute of the spline called tension, which is shown in Figure 6-9. As the handles are shortened from position 1 to 2 to 3, the piano wire loses stiffness and bends more sharply around the control point. A third attribute of a spline is the angle where the two line segments meet at the control point. The angle can be an exact 180 degrees, or flat, as shown in Figure 6-8 and Figure 6-9, which makes it a continuous line. However, a break in the line can be introduced like that in Figure 6-10, putting a kink in our piano wire. (Figure 6-10: adjusting angle. Figure 6-11: translation. Figure 6-12: Mr. Tibbs. Figure 6-13: roto spline. Figure 6-14: finished roto.) In addition to adjusting the slope, tension, and angle at each control point, the entire shape can be picked up and moved as a unit. It can be translated (moved, scaled, and rotated), taking all the control points with it. This is very useful if the target has moved in the frame, such as with a camera pan, but has not actually changed shape. Of course, in the real world it will have both moved and changed shape, so after the spline is translated to the new position, it will also have to be adjusted to the new shape. Now let's pull together all that we have learned about splines and how to adjust them to see how the process works over an actual picture. Our target will be the insufferable Mr. Tibbs, as shown in Figure 6-12, which provides a moving target that also changes shape frame-by-frame. Figure 6-13 shows the completed shape composed of splines with the many control points adjusted for slope, tension, and angle. The finished roto is shown in Figure 6-14. One very important guideline when drawing a shape around a target object is to use as few control points as possible while maintaining the curvatures you need. This is illustrated by the shape used to roto the dapper hat in Figure 6-15, which uses an excessive number of control points.
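To make the slope and tension adjustments concrete, here is a small Python sketch of one cubic Bézier segment, one common spline type (actual compositing packages use various spline flavors). The function name and the sample points are my own illustrative choices; the key idea is that a handle's direction sets the slope and its length sets the tension.

```python
# One cubic Bezier segment of a roto spline. Each end carries a handle:
# the handle's direction controls the slope of the curve there, and the
# handle's length controls the tension (stiffness of the "piano wire").

def bezier_point(p0, h0, h1, p1, t):
    """Evaluate the segment at parameter t in [0, 1].

    p0, p1 -- the two control points the segment connects
    h0, h1 -- their handle tips (direction = slope, length = tension)
    """
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * h0[0] + 3 * u * t**2 * h1[0] + t**3 * p1[0]
    y = u**3 * p0[1] + 3 * u**2 * t * h0[1] + 3 * u * t**2 * h1[1] + t**3 * p1[1]
    return (x, y)

# Shortening the handles (less tension) pulls the curve tighter around
# the control points, like the piano wire losing stiffness.
loose = bezier_point((0, 0), (0, 2), (4, 2), (4, 0), 0.5)      # long handles
tight = bezier_point((0, 0), (0, 0.5), (4, 0.5), (4, 0), 0.5)  # short handles
```

At the midpoint the loose curve bulges noticeably higher than the tight one, which is exactly the tension effect described above.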
The additional points increase the amount of time it takes to create each keyframe because there are more points to adjust on each frame. They also increase the chances of introducing chatter or wobble to the edges. ARTICULATED ROTOS Things can get messy when rotoscoping a complex moving object such as a person walking. Trying to encompass an entire character with crossing legs and swinging arms in a single shape like the one used for the cat in Figure 6-13 quickly becomes unmanageable. A better strategy is to break the roto into several separate shapes, which can then be moved and reshaped independently. Many compositing programs also allow these separate shapes to be linked into hierarchical groups where one shape is the child of another. When the parent shape is moved, the child shape moves with it. This creates a skeleton with moveable joints and segments rather like the target object. This is more efficient than dragging every single control point individually to redefine the outline of the target. When the roto is a collection of jointed shapes like this, it is referred to as an articulated roto. Figure 6-17 through Figure 6-19 illustrate a classic hierarchical setup. The shirt and lantern are separate shapes. The left and right leg shapes are children of the shirt, so they move when the shirt is moved. The left and right feet are children of their respective legs. The light blue lines inside the shapes show the skeleton of the hierarchy. To create frame 2 (Figure 6-18), the shirt was shifted a bit, which took both of the legs and feet with it. The leg shapes were then rotated at the knee to reposition them back over the legs, and then the individual control points were touched up to complete the fit. Similarly, each foot was rotated to its new position and the control points touched up. As a result, frame 2 was made in a fraction of the time it took to create frame 1.
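The parent/child behavior just described can be sketched as a toy hierarchy in Python. The class, the translate-only transform, and the shirt/leg shapes are my own simplifications; a real articulated roto would also carry rotation and scale per node.

```python
# A toy parent/child shape hierarchy: translating the parent ("shirt")
# drags every child ("leg", "foot") with it, so only small residual
# adjustments are needed on each frame.

class ShapeNode:
    def __init__(self, name, points, parent=None):
        self.name = name
        self.points = points       # control points in local space
        self.offset = (0.0, 0.0)   # this node's own translation
        self.parent = parent

    def world_points(self):
        """Accumulate offsets up the hierarchy to place the points."""
        dx, dy = self.offset
        node = self.parent
        while node is not None:
            dx += node.offset[0]
            dy += node.offset[1]
            node = node.parent
        return [(x + dx, y + dy) for x, y in self.points]

shirt = ShapeNode("shirt", [(0, 0), (2, 0), (2, 3), (0, 3)])
leg = ShapeNode("left_leg", [(0, 3), (1, 3), (1, 6), (0, 6)], parent=shirt)

# Move only the parent; the child's control points follow for free.
shirt.offset = (5.0, 0.0)
```

After the parent move, `leg.world_points()` already sits near its new position, and only the per-point touch-up described above remains.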
Frame 3 was similarly created from frame 2 by shifting and rotating the parent shape, followed by repositioning the child shapes, then touching up control points only where needed. This workflow essentially allows much of the work invested in the previous frame to be recycled into the next with just minor modifications. There is a second, less obvious advantage to the hierarchical animation of shapes: it results in a smoother and more realistic motion in the finished roto. If each and every control point is manually adjusted, small variations become unavoidable from frame to frame. After all, we are only human. When the animation is played at speed, the spline edges will invariably wobulate (wobble and fluctuate). By translating (moving) the entire shape as a unit, the spline edges have a much smoother and more uniform motion from frame to frame. INTERPOLATION Time to talk temporal. Temporal, of course, refers to time. Since rotos are a frame-by-frame animation, time and timing are very important. One of the breakthroughs that computers brought to rotoscoping, as we have seen, is the use of splines to define a shape. How infinitely finer to adjust a few control points to create a smooth line that contours perfectly around a curved edge, rather than to draw it by hand with a pencil or ink pen. The second, even bigger breakthrough is the ability of the computer to interpolate the shapes, where the shape is only defined on selected keyframes, and then the computer calculates the in-between (interpolated) shapes for you. A neat example of keyframe interpolation is illustrated in Figure 6-20. Of these five frames, only the first and last are keyframes, while the three in-between frames are interpolated by the computer.
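In its simplest form, this in-betweening amounts to interpolating each control point between the surrounding keyframe shapes. Here is a minimal Python sketch assuming plain linear interpolation (real packages typically offer smoother temporal curves as well; the function name is my own):

```python
# Compute an in-between shape: each control point is linearly
# interpolated between its positions in the two bracketing keyframes.

def interpolate_shape(key_a, key_b, frame_a, frame_b, frame):
    """Return the interpolated control points at an in-between frame."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(key_a, key_b)]

# Keyframes at frame 1 and frame 5; frames 2-4 are interpolated.
shape_f1 = [(0.0, 0.0), (10.0, 0.0)]
shape_f5 = [(4.0, 0.0), (14.0, 8.0)]
shape_f3 = interpolate_shape(shape_f1, shape_f5, 1, 5, 3)  # halfway between
```

Because every in-between point moves along a straight, evenly divided path, the interpolated frames are free of the hand-placed jitter described below.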
The computer compares the location of each control point in the two keyframes, then calculates a new position for them at each in-between frame so they will move smoothly from keyframe 1 to keyframe 2. There are two very big advantages to this interpolation process. First, the number of keyframes that the artist must create is often less than half the total number of frames in the shot. This dramatically cuts down on the labor that is required for what is a very labor-intensive job. Second, and perhaps even more important, is that when the computer interpolates between two shapes, it does so smoothly. It has none of the jitters and wobbles that a clumsy humanoid would have introduced when repositioning control points on every frame. Bottom line, computer interpolation saves time and looks better. In fact, when rotoscoping a typical character it is normal to keyframe every other frame. The interpolated frames are then checked, and only an occasional control point touch-up is applied to the in-between frames as needed. KEYFRAMES In the previous discussion about shape interpolation, the concept of the keyframe was introduced. There are many keyframing strategies one may use, and choosing the right one can save time and improve the quality of the finished roto. What follows is a description of various keyframe strategies with tips on how you might choose the right one for a given shot. On 2s A classic and oft-used keyframe strategy is to keyframe on 2s, which means to make a keyframe at every other frame, that is, frames 1, 3, 5, 7, and so forth. The labor is cut in half and the computer smooths the roto animation by interpolating nicely between each pair of keyframes. Of course, each interpolated frame has to be inspected and any off-target control points must be nudged into position. The type of target where keyframing on 2s works best would be something like the walking character shown in the example in Figure 6-21.
The action is fairly regular, and there are constant shape changes, so frequent keyframes are required (Figure 6-21: keyframe on 2s). On shots where the action is regular but slower, it is often fruitful to try keyframing on 4s (1, 5, 9, 13, etc.), or even on 8s (1, 9, 17, 25, etc.). The idea is to keep the keyframes on a binary spacing (on 2s, on 4s, on 8s, etc.) for the simple reason that it ensures you will always have room for a new keyframe exactly halfway between any two existing keyframes. If you keyframe on 3s (1, 4, 7, etc.), for example, and need to put a new keyframe between 1 and 4, the only choice is frame 2 or 3, neither of which is exactly halfway between them. If animating on 4s (1, 5, 9, etc.) and you need to put a new keyframe between 5 and 9, frame 7 is exactly halfway between them. Figure 6-22 shows the sequence of operations for keyframing a shot on 2s in two passes, by first setting keyframes on 4s, then in-betweening those on 2s. Pass 1 sets a keyframe at frames 1, 5, and 9; then on a second pass the keyframes are set for frames 3 and 7. The work invested in creating keyframes 1 and 5 is partially recovered when creating the keyframe at frame 3, plus frame 3 will be smoother and more natural because the control points will be very close to where they should be and only need to be moved a small amount. Bifurcation Another keyframing strategy is bifurcation, which simply means to fork or divide into two. The idea is to create a keyframe at the first and last frames of a shot, then go to the middle of the shot and create a keyframe halfway between them. You then go midway between the first keyframe and the middle keyframe and create a new keyframe there, then repeat that for the last frame and middle frame, and keep subdividing the shot by placing keyframes midway between the others until there are enough keyframes to keep the roto on target.
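The bifurcation strategy above amounts to visiting frames in a fixed order: the endpoints first, then successive midpoints of each remaining span. A small Python sketch of that ordering (the function name is my own illustrative choice):

```python
# Generate the order in which bifurcation visits keyframes: first and
# last frame, then the midpoint, then the midpoints of each half, and
# so on until the spans are too small to subdivide.

def bifurcation_order(first, last):
    """Return keyframe numbers in bifurcation order."""
    order = [first, last]
    spans = [(first, last)]
    while spans:
        a, b = spans.pop(0)
        mid = (a + b) // 2
        if mid in (a, b):
            continue  # span too small to subdivide further
        order.append(mid)
        spans.append((a, mid))
        spans.append((mid, b))
    return order

# A 9-frame shot keyframed by bifurcation:
print(bifurcation_order(1, 9))  # → [1, 9, 5, 3, 7, 2, 4, 6, 8]
```

Each new keyframe lands between two that already exist, so the interpolated shape is always close to the target before the artist touches it, which is exactly the labor-saving effect described in the text.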
The situation where bifurcation makes sense is when the motion is regular and the object is not changing its shape very radically, such as the sequence in Figure 6-23. If a keyframe were first placed at frame 1 and frame 10, then the roto checked midway at frame 5 (or frame 6, since neither one is exactly midway), the roto would not be very far off. Touch up a few control points there, and then jump midway between frames 1 and 5 and check frame 3. Touch up the control points and jump to frame 8, which is (approximately) midway between the keyframes at frame 5 and frame 10. Figure 6-24 illustrates the pattern for bifurcation keyframing. While you may end up with keyframes every couple of frames or so, bifurcation is more efficient than simply starting at frame 1 and keyframing on 2s, that is, assuming the target object is suitable for this approach. This is because the computer is interpolating the frames for you, which not only puts your shape's control points close to the target to begin with, but also moves and pre-positions the control points for you in a way that makes the resulting animation smoother than if you tried to keyframe it yourself on 2s. This strategy efficiently recycles the work invested in each keyframe into the new in-between keyframe. Extremes Very often the motion is smooth but not regular, such as the gyrating airplane in Figure 6-28, which is bobbing up and down as well as banking. In this situation, a good strategy is to keyframe on the extremes of the motion. To see why, consider the airplane path plotted in Figure 6-25. The large dots on the path represent the airplane's location at each frame of the shot. The change in spacing between the dots reflects the change in the speed of the airplane as it maneuvers. In Figure 6-26, keyframes were thoughtlessly placed at the first, middle, and last frames, represented by the large red dots.
The small dots on the thin red line represent where the computer would have interpolated the rotos using those keyframes. As you can see, the interpolated frames are way off the true path of the airplane. However, in Figure 6-27, keyframes were placed on the frames where the motion extremes occurred. Now the interpolated frames (small red dots) are much closer to the true path of the airplane. The closer the interpolation is to the target, the less work you have to do and the better the results. To find the extremes of a shot, play it in a viewer so you can scrub back and forth to make a list of the frames that contain the extremes. Those frames are then used as the keyframes on the first roto pass. The remainder of the shot is keyframed by using bifurcation. Referring to the real motion sequence in Figure 6-28, the first and last frames are obviously going to be extremes, so they go on our list of keyframes. Looking at the airplane's vertical motion, it appears to reach its vertical extreme on frame 3. By placing keyframes on frames 1, 3, and 10, we stand a good chance of getting a pretty close fit when we check the interpolation at frame 7 (see Figure 6-29). If the keyframe were placed at the midpoint on frame 5 or 6, instead of the motion extreme at frame 3, the roto would be way off when the computer interpolates it at frame 3. Final Inspection Regardless of the keyframe strategy chosen, when the roto is completed it is time for inspection and touch-up. The basic approach is to use the matte created by the roto to set up an inspection version of the shot that highlights any discrepancies in the roto, then go back in and touch up those frames. After the touch-up pass, one final inspection pass is made to confirm all is well. Figure 6-30 through Figure 6-32 illustrate a typical inspection method.
The roto in Figure 6-31 was used as a mask to composite a semi-transparent red layer over the film frame in Figure 6-32 to highlight any discrepancies in the roto. It shows that the roto falls short on the white bonnet at the top of the head and overshoots on the side of the face. The roto for this frame is then touched up and the inspection version is made again for one last inspection to confirm all the fixes and that there are no new problems. Using this red composite for inspection will probably not work well when rotoscoping a red fire engine in front of a brick building. Feel free to modify the process and invent other inspection setups based on the color content of your particular shots. MOTION BLUR One of the historical shortcomings of the roto process has been the lack of motion blur. A roto naturally produces clean, sharp edges, as in all the examples we have seen so far, but in the real world, moving objects have some degree of motion blur where their movement has smeared their image on the film or in the video. Figure 6-33 shows a rolling ball of yarn with heavy motion blur. The solution is an inner and outer spline that define an inside edge that is 100% solid and an outside edge that is 100% transparent, as shown in the example in Figure 6-34. The roto program then renders the matte as 100% white from the inner spline, graduating off to black at the outer spline. This produces a motion-blurred roto such as the one shown in Figure 6-35. Even if there is no apparent motion blur in the image, it is often beneficial to gently blur the rotos before using them in a composite to soften their edges a bit, especially in film work. One problem that these inner and outer splines introduce, of course, is that they add a whole second set of spline control points to animate, which increases the labor of an already labor-intensive process. However, when the target object is motion blurred, there is no choice but to introduce motion blur in the roto as well.
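The inner/outer matte can be sketched numerically. In this toy Python version the two splines are stood in for by two circles around a point, assuming a simple linear falloff from the solid inner edge to the transparent outer edge (a real program would measure distance to the actual spline curves):

```python
# Graded matte between an inner and outer boundary: fully white inside
# the inner radius, fully black outside the outer radius, and a linear
# falloff in between, which is what softens a motion-blurred edge.

def graded_matte_value(px, py, cx, cy, r_inner, r_outer):
    """Matte density at a pixel: 1.0 solid -> 0.0 fully transparent."""
    d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
    if d <= r_inner:
        return 1.0
    if d >= r_outer:
        return 0.0
    return (r_outer - d) / (r_outer - r_inner)  # linear falloff

# Ball of yarn centered at (0, 0): solid to radius 2, gone by radius 4.
print(graded_matte_value(1.0, 0.0, 0, 0, 2.0, 4.0))  # inside inner: 1.0
print(graded_matte_value(3.0, 0.0, 0, 0, 2.0, 4.0))  # in falloff: 0.5
print(graded_matte_value(5.0, 0.0, 0, 0, 2.0, 4.0))  # outside outer: 0.0
```

Pulling the two boundaries apart only where the target is blurred (or out of focus) gives the selective edge softness discussed next.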
A related issue is depth of field, where all or part of the target may be out of focus. The bonny lass in Figure 6-4, for example, actually has a shallow depth of field, so her head and her near shoulder are in focus, but her far shoulder is noticeably out of focus. One virtue of the inner and outer spline technique is that edge softness can be introduced only and exactly where it is needed, so the entire roto does not need to be blurred. This was done for her roto in Figure 6-5. SEMI-TRANSPARENCY Another difficult area for rotoscoping is a semi-transparent object. The main difficulty with semi-transparent objects is that their transparency is not uniform, as some areas are denser than others. The different levels of transparency in the target mean that a separate roto is required for each level. This creates two problems. The first is that some method must be devised for reliably identifying each level of transparency in the target so it may be rotoscoped individually, without omission or overlap with the other regions. Second, the roto for each level of transparency must be made unique from the others in order to be useful to the compositor. A good example of these issues is the lantern being carried by our greenscreen boy. A close-up is shown in Figure 6-36. When a matte is created using a high-quality digital keyer (Figure 6-37), the variable transparency of the frosted glass becomes apparent. If this object needed to be rotoscoped to preserve its transparency, we would need to create many separate roto layers, each representing a different degree of transparency. This is usually done by making each roto a different brightness: a dark gray roto for the very transparent regions, medium brightness for the medium transparency, and a bright roto for the nearly solid regions. While it is a hideous task, I have seen it done successfully.
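Stacking the per-transparency rotos into one matte can be sketched as follows; the gray levels, the tiny layer sets, and the brightest-layer-wins combine rule are my own illustrative choices, not any particular package's method:

```python
# Combine several roto layers, each rendered at its own gray level,
# into a single matte: at every pixel the brightest (most opaque)
# layer wins.

def combine_roto_layers(layers, width, height):
    """layers: list of (brightness, pixel_set) pairs.

    brightness -- matte value for that transparency level (0.0 to 1.0)
    pixel_set  -- set of (x, y) pixels that roto covers
    """
    matte = [[0.0] * width for _ in range(height)]
    for brightness, pixels in layers:
        for x, y in pixels:
            matte[y][x] = max(matte[y][x], brightness)
    return matte

# Lantern glass: a faint region, a medium region, and a near-solid rim.
layers = [
    (0.3, {(0, 0), (1, 0), (2, 0)}),   # very transparent
    (0.6, {(1, 0), (2, 0)}),           # medium transparency
    (0.9, {(2, 0)}),                   # nearly solid
]
matte = combine_roto_layers(layers, 3, 1)
print(matte[0])  # → [0.3, 0.6, 0.9]
```

Each level still has to be outlined by hand, which is why the text calls the overall job hideous; the combine step itself is the easy part.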
Motion tracking and Stabilizing MOTION TRACKING One of the truly wondrous things that a computer can do with moving pictures is motion tracking. The computer is pointed to a spot in the picture and then is released to track that spot frame after frame for the length of the shot. This produces tracking data that can then be used to lock another image onto that same spot and move with it. The ability to do motion tracking is endlessly useful in digital compositing and you can be assured of getting to use it often. Motion tracking can be used to track a move, a rotate, a scale, or any combination of the three. It can even track four points to be used with a corner pin. One frequent application of motion tracking is to track a mask over a moving target. Say you have created a mask for a target object that does not move, but there is a camera move. You can draw the mask around the target on frame 1, then motion track the shot to keep the mask following the target throughout the camera move. This is much faster and is of higher quality than rotoscoping the thing. Wire and rig removal is another very big use for motion tracking. A clean piece of the background can be motion tracked to cover up wires or a rig. Another important application is monitor screen replacement, where the four