Education Leadership

The world in which students learn today is quite different from the one their parents studied in. Many parents and teachers still expect today's students to learn the way they did a generation or two ago, yet these students are operating on a very different platform. The methods of engagement, to begin with, are quite different, primarily because of the technological facilities now available to students. Marc Prensky's article in Educational Leadership highlights points such as these, among others, and urges educators to reflect on how they engage their students so as to improve the learning experience.

The stage of engagement is the most important to address because it sets the tone for teaching today's students. Motivation follows from this stage, and without these two elements it becomes tedious to get students to learn effectively. According to Prensky, adults need to realize that students are completely engaged in the digital world outside school. It is therefore understandable that they are not motivated in school, where they do not encounter a similar digital experience. One might be inclined to think that a digital experience requires educators to implement expensive graphics or multimedia. However, what Prensky refers to is simply including gameplay, which is something a great many students are already used to. The learning experience could be transformed tremendously, with students demonstrating a very high level of enthusiasm.

Apart from introducing gameplay, students should be consulted when decisions are made. Educators are no longer in a position to make decisions for students on their own; instead, they need to take into consideration how students would like to learn and organize the methods of learning accordingly. In order to carry out any of this, an educational institution must be flexible.

Students today are largely able to run programs on their own. What educators ought to be looking at, then, is encouraging students to widen the range of programs they can operate, which will help them carry out more and more tasks independently.

Educators should not hold themselves or their students back by looking after legacies. Instead, they should look for alternatives that can improve future learning processes. This will help reduce the gap between school activities and after-school activities, so that school can be as much fun for students as what follows it. If educators can accomplish this, they will succeed in empowering students in school as much as they are empowered after school.

GMOs

The debate over the safety of genetically modified foods for our health has drawn many varied responses from the public domain. Ongoing scientific research has claimed substantial levels of health safety in the use of GMOs by human beings. However, opponents of biotechnology have claimed that the modification of foods has long-term side effects on the health of users (Atherton, 2002). They claim that such foods lack natural nutritional constituents and are thus not evidently safe for long-term use by human beings. Indeed, the intensified use of such foods has been closely associated with many health complications, such as increasing rates of obesity in the global community (Whitman, 2000).

This paper is a critical discussion of the question of whether or not genetically modified organisms are safe for use by humans. The author argues in particular in support of the thesis that GMOs are not safe for all age groups.

Definition of GMOs
GMOs can be simply defined as any kind of genetically modified plant, animal or microorganism (Karlsson, 2003). This means that the genes of the living organism have been altered to meet some preset survival and/or productivity characteristics. The process of modifying the genes involves either molecular genetics techniques or protein engineering. It is through this process that non-native proteins, which have been modified to match the expressions of the new host, are introduced into the organism (Karlsson, 2003). The modification process ensures that the genetic differences between the organisms are overcome, thus allowing for proper transport of the protein in the cells.

Assessing Safety of GMOs
The debate on GMOs has received many contradicting opinions from scientific researchers. Although the process of authorizing GMO products for use is claimed to be based on safety assessment, no safety evidence has been identified for the long-term use of such products (Roth, 2000). It is for these reasons that many doubts still exist on whether or not such foods are fit for human consumption. Statistical evidence has shown that the extensive use of genetically modified foods in the United States is a leading cause of the high rates of obesity among the nation's young (Whitman, 2000). Artificial sweeteners and soft drinks, as well as fast foods, are examples of GMO products which have been found to be a threat to the health of citizens. It has been claimed that three out of every four children in the United States are either obese or at risk of obesity (Whitman, 2000).

Why GMOs are not safe for all age groups
There are many reasons for opposing the use of GMOs in society. The first is that they are highly nutritive, and extensive use therefore threatens to overwhelm the effectiveness of the body's digestion and metabolism, risking the health of the user (Roth, 2000). This is best explained by assessing the level of artificial nutrition found in fast foods. Another reason is that the genetic modification of an organism has been found to alter its overall functioning (Atherton, 2002). This means that its natural constituents are greatly compromised, and with them its potential nutritional value.

It is, however, to be realized that with the climatic changes taking place in the world, the development and subsequent use of genetically modified foods will always be a necessity. It is on this basis that future generations will live to deal with the side effects of GMOs, as such foods will always remain a necessity in ensuring sustainable food security (Roth, 2000).

In conclusion, it is clear that the debate over whether or not GMO products are safe for use by humans is far from over. However, it is to be noted that there is much need for more critical research to ascertain the long-term safety of GMOs for human health.

Understanding Fossil Viruses

The article discusses what it calls fossil viruses: viruses that have infected living beings larger than an amoeba and have become dormant, failing to reproduce and evolve. Such viruses, when attached to egg or sperm cells, can be transmitted to the offspring, thus becoming part of the genetic line and, over time, becoming disabled if they were still reproducing and evolving when they were passed on.

Such studies have allowed scientists to get another view of how viruses evolved. This understanding could lead to a better understanding of HIV and how best to combat it. Moreover, fossil viruses are adding a new dimension to the theory of evolution. With an estimated 8.3 percent of the human genome traceable to retrovirus infections, scientists have concluded there must be more of these viruses hidden in our bodies.

The one discovered in humans, and the focal point of the article, the borna virus, is only the beginning of such discoveries, as scientists presume, with other fossil viruses being found in mammals such as apes, squirrels, guinea pigs, elephants and monkeys. These borna viruses are found not to be destructive but to benefit our bodies, although exactly how is unclear, as opposed to other fossil viruses that are used to make syncytin, which is found to ward off viral invasions and to enable mothers to send nutrients to their embryos.

Scientifically Sound
The article, I believe, is good science. Although linking it to the much-debated topic of evolution could lead creationists to dismiss such discoveries as proof of evolution, no one could discount the fact that this discovery is very useful. This is true particularly in the field of understanding viruses and how to combat them, especially retroviruses such as HIV that plague our society. This further understanding of viruses can in turn advance medical science.

Lessons
From this article, I learned, first of all, about the existence of fossil viruses, what they are and how they came to be. I also learned that not all viruses can be considered harmful to the body, as in the case of the fossil viruses that are used to make syncytin. Finally, I learned that viruses can be transferred from one generation to another by integrating themselves into the egg and/or sperm cells of parents.

Class Associations
The article relates very well to genetics, the evolution of life and human biology. It relates to genetics because the viruses discussed have genetic codes that embed themselves into a DNA strand. It can also fall under the evolution of life, especially because it offers another piece of evidence of our connection with our primate relatives. Its connection to human biology is obvious, since it deals with further understanding of the human body.

Strengths and Weaknesses of Global Information Systems

The original purpose of a computer network was to provide basic communication among scientific users, but it has now become a social tool as the world continues to experience major technological advancements. A great many people today are able to access the internet regularly through the World Wide Web as a source of information. Different Global Information Systems are developed differently: either by a single organization, or by many organizations coming together to build a jointly operated GIS so that no particular company owns the system. This term paper will discuss the major strengths and weaknesses of each kind of GIS, and the decisions that are vital in coming up with a GIS.

For many years, the main purpose of a computer network was to provide basic communication among scientific users, but over the last decade it has become a socio-economic phenomenon which many see as the right tool to bring about economic development in the world. Press coverage of internet use has also become ubiquitous and extremely frequent. Over 30 million people across the world now access the internet regularly through the World Wide Web (WWW) as their major source of information. At the same time, more people and organizations have recognized the potential of internet use to drive economic growth, and organizations are therefore seeking to use the Internet in all their marketing and selling operations. Actual trade through the Internet has been on the rise over the past decade (Heinzl & Rudy, 2006). However good it sounds to talk about Global Information Systems, it is necessary to note that there are a number of strengths and weaknesses that will result from any information system. In terms of Global Information Systems, we can have a GIS created and owned by a given organization, and another developed by a number of partners while owned by none; a good example of the latter is the World Wide Web.

Strengths and Weaknesses of Different GIS Systems
Global Information Systems Incorporation has been a reputable company providing information technology services and business continuity solutions and services since mid-1996. The company's main business has always been focused on custom applications and systems development, systems integration and the design of management infrastructures. With its headquarters in Michigan, the corporation has a number of development centers and offices in different parts of the United States, such as Southfield, Michigan, as well as Pennsylvania and Allenton. This has made the company able to serve a number of companies across the country. Through the necessary experience and knowledge, the company has been able to succeed in most of its business operations (Mena, 2001). It also has well-equipped technological centers and in-house development, supported by the necessary software and hardware. Looking at this company, we note that it has been able to offer GIS systems to a number of companies and people around the country, but a number of issues arise from this arrangement.

There are many responsibilities that any given GIS has to fulfil. The strength of a system owned by one company is that high security is maintained, since the company provides the IP addresses. The World Wide Web, on the other hand, cannot offer such assurances, which is why it can be highly compromised (Kenneth, 2007). For instance, the passwords used on the WWW are themselves weak in protecting one's information, for a number of reasons. The main reason is that all passwords depend on a very weak link connecting a computer to chains of network security, all controlled by the human user. This has been a major cause of the cybercrime that is common on the WWW, where no single company is held responsible. On the other hand, the WWW has a number of strengths of its own. For one, the user can get fast and cheap publication and distribution of materials, unlike with a company-owned GIS; once published, materials circulate around the globe instantly, since the web reaches the largest portion of the population.

Generally, the wide applications and strengths of the WWW are fairly self-evident. For instance, the web is very easy to use thanks to its connectivity interfaces, the presence of hypertext and links in documents, integrated images and sound when viewing, and limited abilities to edit or format documents displayed on the web, which helps ensure that the data contained therein is not compromised. Another strength is that the web provides users with servers available on all hardware platforms, which ensures that a document published in one format can be easily viewed on other formats and operating systems. Finally, the ease with which we can use the World Wide Web is very promising; web browsers have made the internet easy to use.

The major weaknesses are that the creators of web pages and documents have little control over how they appear to the user. There is also no documented, accepted method of displaying line-drawn vector graphs and graphics; this is only done through bitmaps, which do not relay the best information to the user (Jones, 2002). Another weakness is the slowness of sites between the server and the user: when an instantaneous response is needed, an individual will sometimes have to wait longer than expected. It is also very hard for the server to keep and maintain the status of users and their information, since the web has been designed for discrete transactions only, so that once the link is broken, the server has to forget about that particular user.

A GIS owned by one organization, on the other hand, tends to address most of the above weaknesses of the World Wide Web, and this becomes the source of strength for a single-corporation GIS. But even with those problems solved, a number of weaknesses emerge. For example, it is not easy to make information reach a large number of people. It may also prove expensive to deal with security, since the organization is liable for all transactions through its system (Gunasekaran, 2009), and maintenance is expensive compared with the World Wide Web.

Architecture, Standards, Support and Types of Systems
Different GIS systems will have different standards, support systems and architecture. This difference arises because each system must be able to meet its goals and provide the user with the best services. The most important system types for both kinds of GIS involve the integration of software and hardware, which should be adopted depending on the demands, standards and effectiveness in providing the user with the best results (DeMers, 2005). A GIS incorporates a complex architecture in its functioning: its components are brought together in such a way that sources can be integrated between the server and the user, and coordination is effective, with ongoing development and implementation to improve the architectural constitution of the system. There should be good interconnections between the servers and the users. Within a given GIS, it is also necessary that relevant standards are met which are acceptable from a global perspective. A number of elements have to be considered here: for example, the system should be able to meet all the needs of a GIS in order to meet the requirements of the user, a widely accepted language such as English should be used as the default language, and universal standards of measurement and time should be retained.

Decisions in Designing and Pursuing a Global Information System
There are a number of decisions which have to be taken into consideration when developing and pursuing a GIS, to ensure that all issues are addressed from time to time and that the system is in a position to provide the necessary service. For both a GIS owned by many organizations and one owned by a single organization, the first vital decision lies in database warehousing: all the data to be used in the system should have a secure warehouse from which it can be easily accessed (Borgman, 2003). The other important decision lies in the question of outsourcing, an inherent question whenever an organization is to organize and design a GIS that will function effectively. There is also the question of the system's sustainability and its capability to meet the requirements of the user; the tools and systems to be integrated should be in a position to support the entire system. For instance, in the case of a single organization, financial matters must be taken into consideration to ensure that all users are provided with the necessary services without these being easily compromised. The managerial perspective of the system also has to be addressed in a competent manner: managers should be able to perceive and respond to all information and questions raised within the organization's GIS, and this will be important in making further decisions. GIS systems should also be designed so that they can combat information overload, through practical consideration in the system's design (Avgerou, 2002).

In conclusion, the world really needs GIS systems if we are to have further economic development, because the web helps speed up the operations and transactions being carried out today. It is therefore required that any global information system provider meet all the requirements for effective usage of the system. Basically, there should always be continuity in decision making for any GIS to ensure security and user satisfaction, and this should be done in a manner which meets all the protocols and requirements for internet connections around the globe (Kumar, 2000). These decisions are necessary in addressing all the technological challenges that may face the system.

Corvette Z06

The moment one sees the symbolic flag on the bonnet and the four tail lights at the rear, one can easily guess it is the Corvette. As the old saying goes, Rome was not built in a day; similarly, it took 60 years of constant research and planning to beget the Corvette Z06. This car is a jewel in GM's stable. Everything in this car is remarkable, right from the aluminum frame to the rear wind screen.

Unlike other conventional cars which are manufactured in a single factory, the Corvette is produced in three factories. The primary factory is in Bowling Green, Kentucky, where the car is assembled; next is Hopkins, Kentucky, where Dana builds the aluminum frame of the Corvette Z06, which is 30% lighter than its predecessor's; and the heart of the car, the engine, comes from a factory near Detroit. It takes a total of 1,200 employees to manufacture this scintillating piece of machinery.

The Bowling Green factory is state of the art. It is organized so that the car goes through several areas before it can be declared fit for the road: the trim area, testing, the paint shop, chassis and the car track are the phases the car undergoes. All these phases are strictly sequential and dependent on one another. Even though several computers and humans are involved, the process seems almost completely automated, and it is extremely rapid.

For example, it takes just 18 minutes to build the super-light frame of the Corvette Z06. The paint process takes 10 hours, and the paint job isn't conventional either: the workers have to remove any dirt and go through a crater test so that they do not carry contamination into the paint shop. I guess there couldn't be any other way to prove the level of professionalism.

One of the most impressive technologies I witnessed is when the frame takes a dip in a protective solution, in a process called electrodeposition, where it is painted black to protect it from rust. What also catches my eye is the immaculate paint job by the robots and the application of adhesive to the frame. The factory has two paint lines, each painting 350 panels a day using 66.84 gallons of paint. Another marvel of the Corvette Z06 is its lightweight body panels. Last but surely not least is the teardrop rear window, which has been a distinctive part of the Corvette since 1963; it is not just cosmetic but also aerodynamic, providing a smooth flow of air.

The real gem which makes the Corvette stand ahead of its competitors is the small-block V-8 engine, invented in 1955 by Chevrolet's chief engineer. It was lightweight, robust and powerful. No account of this car is complete without a word about the engine: the 7-liter LS7 V8 churns out 505 hp, married to a 6-speed manual transmission, taking it to 60 miles an hour in just 3.7 seconds and on to a top speed of 200 mph. These performance figures are comparable to those of sports cars like the Ferrari 430 or Lamborghini Gallardo. The engine is hand-built by a single technician, just like in any sports car, at GM's performance center in Wixom, Michigan.

But just when I was thinking that on the safety front the Corvette has nothing more to offer than the conventional airbags, ABS and a rigid frame, I was left awestruck when I learnt that the car seats are computerized. The seats have an inbuilt device so sensitive that it can sense whether an adult or a child is sitting in them and change the airbag settings accordingly.

The technologies used in the making of the Corvette Z06 are simply marvelous. I wish to see more of these technologies in common cars, such as the carbon-fiber fenders, which weigh only 3 pounds, and the fiberglass panels; I think they could easily be incorporated in some common cars. Carbon fiber could also be quite useful in airplanes, reducing their weight and making them a lot more efficient. The precautions the technicians take, such as prohibiting the use of silicone to get a beautiful paint job, also seem quite effective.

Great things are not done by impulse but by a series of small things put together, and the manufacturing of the Corvette seems to be based on this saying. In a world where it is believed that Americans can only build muscle cars, defamed for their poor handling, the Corvette Z06 silences its critics completely. The car starts at an attractive price of $74,285, which is value for money. Today it is regarded as one of the best cars in the world and is a dream car for many Americans. The legacy of the Corvette will hopefully be carried forward for many years.

Decision Support System, How It Relates To Modeling and Risk Analysis

A Decision Support System (DSS) is defined as all the computer-based components that support decision making. Decision support systems use intelligent, knowledge-based systems and are a subsystem of the organizational information system. Decisions are a component of any operational system, so the inclusion of a support system that facilitates decision making is a measure directed towards improving systems within an organization. Modeling and risk analysis are aspects that are widely incorporated into organizational operations. There are multiple internal and external factors that businesses consider in their operations; the external environment is dynamic, and the relationship between the internal factors that affect system operations is not always clear. This paper seeks to determine how decision support systems relate to risk analysis and modeling within organizations. Determining this relationship and its significance to organizations is the main issue that will be tackled in the paper.

Decision Support Systems
DSS support organizational decision-making activities and, if properly designed, can help in the compilation of raw data that can facilitate value generation within firms. It is evident that a DSS is not only an information repository but also has algorithms and models that can aid the optimal solution of typical business problems (Burstein & Holsapple, 2008). The typical information that can be gathered by DSS applications includes an inventory of all information assets, comparative data and projected revenue figures (Shimizu, de Carvalho, & Jose Barbin, 2006). The development of decision support systems is partly due to improvements in technology and the multiple variables that businesses have to consider in decision making. The dynamism displayed by operational variables in the modern business environment has led to increased awareness of the need for a universal view of operational variables. This need for a comprehensive analysis of an organizational scenario before making any decision is also responsible for the increased use of business modeling and risk analysis.

The definitions and taxonomy used for DSS are not universal, though their role in facilitating organizational operations is universally acknowledged. There are various categorizations of DSS based on either their functionalities or the tasks they carry out. On the basis of functionality and the nature of interaction with the user, there are active, passive and cooperative DSS (Shimizu, de Carvalho, & Jose Barbin, 2006). On the basis of functionalities, there are communication-driven, data-driven, document-driven, knowledge-driven and model-driven DSS.

Modeling 
Modeling is an aspect ingrained in business management that is concerned with the abstraction of real-life requirements into a form that can be easily understood by persons with different views. In an organization, every stakeholder has a different view of the requirements of business operations. For an organization to function optimally, the different views have to be incorporated in organizational decision making and strategy development. A systemic view of an organization shows that addressing the interests and issues faced by different stakeholders is a critical success factor in organizational operations. Furthermore, business modeling aids in developing an understanding of the interaction between internal systems and external entities (Tennent & Friend, 2005). By doing so, a universal picture of the constraints that have to be considered in decision making is developed. In operations management, for example, modeling plays a role in developing an understanding of the variables and conditional constraints, which can then be programmed to come up with an optimal solution. Decision support systems can aid not only in the identification of the various entities and the interaction between internal components but also in the optimization of the developed model. Another area in which business modeling comes in handy is the documentation of system development. Documentation is a critical requirement in organizational operations that aids not only knowledge development but also the evaluation of project performance, by helping to determine whether a resulting system meets its specifications. Modeling results in compact, information-laden abstractions of requirements that can be analyzed from both high-level and low-level perspectives. Since DSS require an information repository and have intelligent systems capabilities, business modeling may help in developing the functionality and accuracy of a DSS.
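To make the optimization step above concrete, the sketch below shows how a modeled set of business constraints can be programmed and solved. It is a minimal, hypothetical product-mix example, not drawn from the cited sources; the profit figures and resource limits are assumptions, and SciPy's linear programming routine simply stands in for whatever solver a real DSS might embed.

```python
# Minimal, hypothetical product-mix model: maximize profit subject to two
# resource constraints. All figures are assumed for illustration only.
from scipy.optimize import linprog

# Profit per unit for two products (linprog minimizes, so profits are negated).
profit = [-40, -30]

# Constraints: 2*x1 + 1*x2 <= 100 machine hours, 1*x1 + 2*x2 <= 80 labour hours.
A_ub = [[2, 1],
        [1, 2]]
b_ub = [100, 80]

result = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("Optimal production plan:", result.x)   # units of each product
print("Maximum profit:", -result.fun)
```

In this toy case the solver returns the production quantities that exhaust both resource limits; a DSS would present such a result to the decision maker alongside the underlying data.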

Risk Analysis
Business operations can be looked at as a combination of projects with clearly set goals and multiple internal and external constraints. Risk analysis is the art of developing a concise picture of the variables that may impede the attainment of set goals. In a business setting, risk analysis refers to the analysis of the internal and external environment with the aim of developing a concise picture of threats and weaknesses so that corrective measures can be put in place. It is noteworthy that, unlike business modeling, which may be done once in a project, risk analysis is a continuous process, because operational variables interact and are dynamic (Aven, 2008). Moreover, it is not easy to predict accurately the nature of the operational environment and the requirements that have to be considered in executing a project. Thus, risk analysis within an organization is an ongoing process that facilitates minimization of the effects of threats while attempting to reduce the time and financial costs associated with their management.

The importance of risk analysis lies in mitigation, which is achieved by minimizing the probability that threats to project completion or success will occur. From this dimension, it is evident that some element of programming is required to minimize the likelihood that a threat will occur and to develop measures aimed at addressing threats. This brings out a clear link between DSS and risk analysis: a decision support system can facilitate risk analysis. Moreover, organizational decision making involves consideration not only of organizational ability but also of the risks involved in any endeavor (Aven, 2008). Risk analysis can help decision support systems develop a comprehensive scenario analysis, thus presenting an accurate picture of the requirements, and even of the options, that a business has in seeking its operational goals.

Discussion
To develop a critical understanding of how DSS relate to modeling and risk analysis, the internal structure and functionality of DSS have to be brought to the fore. A typical DSS is made up of a database, a decision-making criterion (the model) and a user interface that facilitates interaction with the system's functionalities.

It is noteworthy that this architecture covers only the technical aspects; people are also an important component of a DSS, considering that it is part of the overall information system (Tennent & Friend, 2005).
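The sketch below is a minimal, hypothetical illustration of the three technical components named above; the inventory data, the two-weeks-of-cover decision rule and the console output are assumptions made purely for the example, not a description of any real DSS.

```python
# Minimal, hypothetical DSS: a data repository, a decision model and a
# user-facing interface. All names and thresholds are illustrative.

inventory_db = {  # "database" component: raw operational data
    "widgets": {"stock": 120, "weekly_demand": 150},
    "gadgets": {"stock": 400, "weekly_demand": 90},
}

def reorder_decision(item: dict) -> str:
    """'Model' component: a simple decision rule over the stored data."""
    weeks_of_cover = item["stock"] / item["weekly_demand"]
    return "REORDER" if weeks_of_cover < 2 else "OK"

def user_interface() -> None:
    """'Interface' component: presents the model's output to the decision maker."""
    for name, item in inventory_db.items():
        print(f"{name}: {reorder_decision(item)}")

user_interface()
```

Even in this tiny form, the separation of data, model and interface mirrors the architecture described in the paragraph above, with people supplying the judgement around it.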

The development of a DSS, and even changes in its functionalities, affects organizational operations. Thus, the operations and management of a DSS are associated with multiple HR risks, for instance resistance, which can only be determined via extensive risk analysis. From the technical architecture of a DSS, the decision-making criterion, or model, plays a vital role in determining its functionality (Burstein & Holsapple, 2008).

Developing an effective model requires thorough consideration of business processes, which can be facilitated by the use of business, system and conceptual models. Modeling in general plays a vital role in ensuring that the functionalities of a DSS are in tandem with business requirements (Tennent & Friend, 2005). It is evident that the various modeling concepts used in developing decision support systems facilitate the formulation of an effective algorithm base.

Current DSS have knowledge development, or learning, capabilities due to the incorporation of intelligent systems. Therefore, as knowledge-based DSS are used, they uncover more trends, thus improving their ability and knowledge of organizational processes and operational variables. This in turn results in DSS that can facilitate modeling by presenting an accurate abstraction of an organization, trends in the external environment and possible lines of action that can be adopted by an organization in seeking its operational goals. In this way, decision support systems facilitate the development of accurate models that aid organizational success.
Risk analysis is a core aspect of project management and general business operations. To develop an effective DSS, specific user requirements, security systems and existing systems have to be considered (Aven, 2008). A major development like a DSS not only affects the technical platform that supports an organization's information needs but may also affect how employees interact with existing systems. Such extensive changes are likely to face both human and technical threats, which is the main reason for the importance allocated to risk analysis in the development of DSS. Another aspect that highlights the importance of risk analysis in business operations is the fact that a DSS is an IT-based structure and is thus affected by the multiple risks associated with rapid developments in IT.

On the other hand, risk analysis is a complex process that requires the collection of data from multiple sources, the generation of trends, the development of models and even programming (Aven, 2008). Moreover, risk analysis may result in a scenario where a business has to choose from multiple competing strategies. In such cases, DSS come in handy due to their extensive information repository, ability to generate models, information-processing capability and interactive nature that allows the incorporation of information that may not be available in the central repository (Burstein & Holsapple, 2008). It is thus evident that risk analysis in an organization is facilitated by the existence of DSS. Another important observation is the role of DSS in facilitating the interaction between modeling and risk analysis. The analysis of risks, especially the calculation of the likelihood of their occurrence, and even the optimization of resource usage under given constraints, have to be modeled. On the other hand, a good model must be appreciative of the uncertainties and threats that exist in the internal and external operational environment. DSS provide an extensive platform where the interaction between modeling and risk analysis is error-free and results in operations that are efficient and value-laden.

DSS are gaining relevance in business due to the multiple uncertainties that businesses face in their operations and the increased need for efficiency. Risk analysis and modeling are also gaining attention in organizations for similar reasons. However, this is not the only point of interaction between these important organizational activities: the development of an extensive DSS depends on how well the modeling and risk analysis aspects are handled, while, on the other hand, an extensive DSS supports modeling and risk analysis in an organization.

Ocean Pollution

Ocean pollution is one of the main problems affecting the global oceans. This pollution directly affects the organisms that live in the oceans and indirectly affects human resources and health. Toxic wastes, oil spills and the dumping of various harmful materials are the chief sources of the pollutants that contaminate the oceans. Noise, as one of the ocean pollutants, greatly affects marine animals. Most of these animals, especially fish and the marine mammals, are highly sensitive to noise or any other form of sound. Underwater, noise can travel very long distances, covering large ocean regions and thus potentially preventing sound-sensitive marine animals from hearing their predators or prey, finding their way or communicating with their mates. In fact, as a result of noise pollution in the oceans, the populations of dolphins and whales have greatly declined (Weilgart, Para 1).

Oil spillage is a major source of ocean pollution, and it has very adverse effects on marine life. As a result of oil spillage in the oceans, the marine water is deprived of air circulation, causing marine life to die in huge numbers for lack of air to breathe. The feeding of marine animals is also interfered with, as food sources are destroyed by this form of pollution. The toxic pollutants that are usually disposed of into the global oceans have very adverse effects, both direct and indirect. Marine life is affected directly by the toxic elements introduced into its habitat, causing organisms either to die or to suffer from several diseases. Some of the toxic elements introduced into the oceans are consumed by marine life, which is in turn consumed by human beings, who thereby suffer indirectly as a result of ocean pollution (Advameg, Inc, Para 4).

It is Safer to Live near Volcanoes than Areas Prone to Earthquakes

In southern Chile, beside the Michinmahuida volcano, is another volcano, Chaiten. Chaiten is a small caldera, a crater formed from the collapse of a volcanic dome. Its crater measures 3 kilometers wide and is cut across by a river on its southwest side; this river flows down through Chaiten Bay to the Gulf of Corcovado.

In May 2008, the once-sleeping volcano became restive and showed signs of activity and an impending major eruption. For more than 9,000 years the volcano had been very quiet. While the eruption was not expected, it did not come as a complete surprise to the residents of the area, since the visible signs of volcanic activity pointed to an eruption. If I chose to locate my furniture business here and needed to evacuate, the best means would be by boat to the next town safely out of the danger zone. My priority would be lives over property.

Sea transport would be best, as it would be the fastest, the easiest to navigate and able to accommodate more people than land transport. Air transport was rendered useless because of the plume of ash and steam, which rose up to 55,000 feet, said to be twice the average altitude of a cruising plane. Four thousand people from Chaiten had to be evacuated. Ash fall affected places up to hundreds of kilometers away, forcing schools, roads and airports to close down and halt operations. Argentina and Uruguay cancelled flights as airplanes sustained damage from volcanic ash clouds. The eruption caused health problems as well, both in Chile and in Argentina. Chaiten is a high-threat volcano, and this last eruption was explosive mainly because of its caldera-type formation. Volcanic emissions such as ash, lahars and pyroclastic materials can disrupt travel and agriculture for a period of time. No one was reported killed, as people were safely evacuated before the volcano unleashed its fury. The early signs, the eruption and the continuous monitoring lasted most of May. Chile has a program of close monitoring of high-threat volcanoes like Chaiten and continuously studies means to lessen the hazard and damage to life and property. As a resident of the area, I can learn from the lessons of the past, keenly observe the signs of the volcano and listen to government alerts. Volcanoes have a life of their own; Chaiten had built up deposits that it released after more than 9,000 years.

On May 12, 2008, a powerful and extremely devastating earthquake hit Hanwang in Sichuan, China. The death toll was about 71,000, counting the dead and those trapped in the rubble. It was the worst earthquake to hit Sichuan in 30 years. Earthquakes in China are a common occurrence; the Sichuan earthquake is the deadliest since the Tangshan earthquake of 1976 and the strongest since the Chayu earthquake of 1950. Eleven million people lost their homes, and agricultural and livestock losses were estimated to exceed US$20 billion. The quake's magnitude was 7.9 on the Richter scale, with the epicenter about 80 km west-northwest of Chengdu at a depth of 19 km.

Unfortunately, there are no preemptive measures that people can take in the event of an earthquake. Earthquakes simply hit, and not even sophisticated and advanced observatories can accurately predict one. The best people can do is take safety measures when caught in buildings or on the road. The earthquake occurred when seismic activity was felt along the Longmenshan Fault, which tore in two sections, one by 7 yards and the other by 4 yards.

I choose Chaiten over Sichuan. Volcanoes give early warning signs on which people can base important decisions, such as evacuating and moving temporarily to safer places. The damage is not as severe as in strong earthquakes and is of shorter duration, and people around volcanic areas are conditioned to the possibility of eruption and evacuation. What is important is to emphasize to everybody that the safety of life and limb takes precedence over material things like a business, which may suffer a temporary setback, but all will be well again. Besides, the Chaiten volcano acted up only after a long, long time.

No one can predict an earthquake, and earthquakes can be frightening and disruptive to people in earthquake-prone areas, which the whole of China is.

Security is very subjective. There is often a balance that must be struck between protection and practicality. Compare and contrast quantitative and qualitative approaches to Risk Assessment. What are the advantages/disadvantages of these approaches? Is one better than another? Why or why not?

Answer

The quantitative approach prices risk in financial terms, and assets are prioritized by their financial values. The results are used to facilitate risk management and to assess the return on the security investment. Results can be interpreted from a management-specific angle, in which monetary values and probabilities are expressed as specific percentage values. Accuracy also increases over time, based on recorded data or history, as the organization gains more experience.
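As a hedged illustration of the quantitative approach just described, the sketch below prices two hypothetical assets using the common Annualized Loss Expectancy formula (ALE = Single Loss Expectancy x Annualized Rate of Occurrence); the asset names, loss figures and probabilities are assumptions invented for the example, not data from any real assessment.

```python
# Hypothetical quantitative risk assessment using ALE = SLE * ARO.
assets = [
    # (asset, single_loss_expectancy_usd, annualized_rate_of_occurrence)
    ("customer database", 250_000, 0.10),  # ~one breach every ten years
    ("public web server",  40_000, 0.50),  # ~one outage every two years
]

for name, sle, aro in assets:
    ale = sle * aro
    print(f"{name}: ALE = ${ale:,.0f} per year")

# Each ALE can then be compared with the annual cost of a proposed control
# to judge whether the security investment pays for itself.
```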

Qualitative analysis, on the other hand, makes data visible and risk rankings easy to understand. It is also a lot easier to reach consensus, since the approach does not focus on quantifying threat frequency. Nor is it necessary to determine the financial value of assets, which makes it easier to involve people who are not experts in security or computing.

Quantitative analysis's drawbacks lie in the subjectivity of the data, as it is based on the opinions of the participants. Reaching credible results and consensus among the participants is also time consuming, as is calculating the results, and the process requires expertise.

Qualitative analysis's drawbacks, on the other hand, stem from insufficient differentiation between important risks. Justifying investment in implementing a control may also prove difficult, because there is no basis for a cost-benefit analysis.

Quantitative analysis is therefore better in terms of logic and practicality: its results are expressed in objective monetary terms, which makes security decisions easier to defend and to act on.

How do the evolving motivations of hackers and other information criminals affect our perception of Risk Mitigation and Contingency Planning?

Consider the CheckPoint case of August 2005, in which the records of personal private information (PPI) of 145,000 consumers were inadvertently released. How does the growing sophistication of the criminal influence our Risk Assessment decisions?

Answer

Society has become more aware of risk mitigation and has realized the importance of contingency planning, since being vulnerable may entail liabilities, especially for companies. Due to today's sophisticated hacking methodologies, online purchasing, marketing and banking services are developing their own systems to protect their consumers. It is also safe to assume that online users are now more wary about giving away their personal information. Interestingly, the more sophisticated hackers become, the richer software companies get, as they develop software to counter the hackers' tools whenever the need arises and the market value increases.

ALTERNATE ENERGY SOURCES

The standard of living and the socio-economic structure of any country can be directly related to the per-capita energy consumption of that particular state. Today it is a well-known fact that the conventional sources of energy, consisting mainly of fossil fuels like coal, natural gas and petroleum products, are steadily depleting with time and would certainly be exhausted in the near future. Further, the damage these conventional sources cause to the environment through pollution has become a major concern. Global civilization has therefore started to move in new directions to find suitable technologies that can replace conventional energy systems with alternative sources of energy. Alternative energy sources include natural sources such as energy derived from the sun, wind, water, geothermal heat and biomass. These sources are self-sustaining and, once used, are replenished, and they are readily available since they occur naturally. Unfortunately, these alternative renewable sources of energy have proved to be cheap only for small-scale production of energy.

When it comes to a larger scale, however, the extraction of energy from alternative sources becomes so expensive with the technologies presently at hand that the cost of the processes used to extract energy on a large scale exceeds the value of the energy extracted itself. This is the very reason why the amount of energy produced from alternative sources is still not sufficient to meet demand on a larger scale. Hence, even today the world is more interested in using conventional energy sources consisting of fossil fuels like coal, natural gas and oil as its primary sources of energy, because with far smaller expenditure on the process of extraction, huge amounts of energy can be produced much more cheaply. It is thus justifiable to state that the total expense of energy extraction by means of alternative energy sources quite often exceeds the value of the energy obtained.

In fact, there have been many attempts to find a suitable generic term to describe the whole range of technologies designed to tap the earth's natural energy flows; among all these terms, the word renewable has gained the widest acceptance. This section of the thesis presents a glimpse of the technologies designed to harness these natural sources of energy. The store of fossil fuel resources on this living planet is constantly being exhausted, and hence at a certain point in time these conventional sources are predicted to be unable to keep pace with the constantly growing energy demand of human civilization. These fossil fuels are a non-replaceable asset, and it is therefore certain that supplies are going to cease at some point in the future. Added to this is the pollution caused by the continuous use of these conventional sources over the years: extracting energy from them releases extremely harmful by-products such as carbon dioxide, carbon monoxide, sulphur and nitrous oxides, along with other toxic hydrocarbons, causing extreme damage to our environment in the form of global warming and ozone layer depletion, whose impact could result in environmental disasters of extremely high intensity, sufficient to bring a complete end to life on the planet.

In order to deal with, or rather eliminate, these consequences, alternative energy sources have been considered a perfect replacement. Such renewable sources are of the kind that, once used, can be replenished. They are also totally environmentally friendly in the sense that they do not pollute the atmosphere and hence produce cleaner, green energy. These energy sources, which are thought to be better in terms of pollution and the cleanness of the energy produced, have been classified as renewable energy sources, which include energy extracted from the sun, wind, water, biomass, oceans and geothermal sources, all of them readily available in nature itself.

Research Findings and Discussions 
Just as explained in the introduction, there has been an ongoing shift from the old or existing energy sources such as oil, coal and natural gas towards the adoption of renewable energy sources. Renewable resources are naturally available and easy to harness. Solar energy has the greatest potential of all the sources of renewable energy: utilizing as little as 5% of this energy would amount to 50 times what human civilization requires to cater to all its needs. The energy from the sun can be utilized both as thermal energy and as electrical energy using photovoltaic cells. Wind energy can be used economically for the generation of electrical energy where high wind velocities are available; a minimum wind speed of 3 m/s is needed to produce energy, so coastal, hilly and valley areas are most suitable for utilizing wind energy. Geothermal energy derives heat from the interior of the earth and also has a high potential for generating energy. Energy from the oceans can be tapped in the form of wave, tidal or ocean thermal energy. Biomass is yet another renewable source of energy, in the form of wood, agricultural residues and the like; these organic wastes can be burnt directly to extract the energy, or specially designed biogas plants can be used to decompose the biomass, and the gas extracted may even be used to run automobiles. This particular source of alternative energy is highly effective for rural and remote areas that the supply of electricity is yet to reach.
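As a hedged illustration of why the minimum wind speed mentioned above matters, the sketch below evaluates the standard wind-power relation P = ½ρAv³Cp for a few wind speeds; the rotor size and power coefficient are assumptions chosen purely for the example, not figures taken from the sources discussed here.

```python
# Illustrative wind-power calculation: available power grows with the cube
# of wind speed, so low wind speeds yield very little usable energy.
import math

rho = 1.225           # air density, kg/m^3
rotor_diameter = 40   # metres (assumed turbine size)
area = math.pi * (rotor_diameter / 2) ** 2
cp = 0.40             # assumed power coefficient (the Betz limit is ~0.59)

for v in (3, 6, 12):  # wind speeds in m/s
    power_kw = 0.5 * rho * area * v**3 * cp / 1000
    print(f"{v} m/s -> roughly {power_kw:,.0f} kW")

# Doubling the wind speed from 6 to 12 m/s yields about eight times the power,
# which is why windy coastal, hilly and valley sites are preferred.
```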

Renewable Energy Sources - Government Policies & Public Opinions
Nowadays we can see that governments do everything possible to make us believe that alternative energy is our future. We see special taxes, laws and conditions for individuals who use or produce green energy. The current president of the USA, Mr. Barack Obama, is supporting green energy everywhere, from financing the renewable energy industry to giving the industry its own territories for building factories. The president approved a $3.6 trillion budget for the next fiscal year, in the context of developing superior technologies to harness renewable sources more effectively. According to the United States government, the best choice would be to become the world's leading exporter of clean energy, and the USA will certainly lead the world in creating new energy sources. The Obama administration has sanctioned the largest investment in basic research funding in American history for green energy research. It is no doubt a great initiative towards renewable energy and ecology on all levels. A government such as that of the United States looks forward to adopting such sources of energy, and as a result it is aiming to provide subsidies and incentives for such large-scale energy production.

Thus it has become evident that the public is equally concerned about the use of renewable energy sources, with only a very few objecting to the idea. The public's concern has been with the pollution that has led to climate change, along with the fact that non-renewable sources cannot be replaced once they are used. Everybody today with some basic awareness considers alternative energy a great opportunity to save on expenses and to help the environment survive and regain balance, in countries around the world. Americans are sometimes portrayed as oblivious to the issues of global warming and the energy and food crises, yet recent research on citizens' beliefs about energy suggests otherwise. When asked whether it is important for the U.S. to develop and use solar energy, 92% of Americans responded yes, according to a recent survey by Kelton Research. When asked which renewable energy resource they would most prefer as an alternative energy source, the answers were solar, 43 percent; wind, 17 percent; natural gas, 12 percent; and nuclear, 10 percent.

This brings into view the fact that, with time, the non-renewable sources of energy will be depleted and there will be no source of energy left. In order to safeguard the world against this expected depletion of non-renewable sources, the use of renewable sources of energy is of value and deserves consideration; energy is in essence of great importance and, in simple terms, its availability should be unlimited. In this regard, it has been noted with a lot of interest that most renewable energy sources are too expensive to utilize fully. Oil prices, for example, have brought a lot of concern because they are too high. On the other hand, taking the example of solar energy, it is a source which is naturally available, and free of cost at that: the only one-time cost involved is setting up the infrastructure to harness the energy, which can then be used for a lifetime without spending a dime, although such infrastructure currently carries market prices that can be high and unaffordable. Another point to note is that renewable sources of energy will never run out of stock, and their affordability improves lifestyles in the sense that they are readily available and affordable to the majority of the population. Out of the many advantages attached to the use of alternative energy to make life better, the most important point to note is that renewable sources of energy do not pollute the environment, as is the case with non-renewable sources. In particular, they are not associated with the production of carbon dioxide, which results in global warming and leads to many problems related to climate change, a factor that has contributed greatly to the global economic recession. Public concern over the effects of climate change and global warming has fueled opinion in favor of the use of alternative sources of energy.

The other important factor is that renewable sources of energy are best suited to rural and remote areas. In this sense, the amount of energy produced is only suitable for a small amount of work to be accomplished. At the same time, the technologies applied to harness energy from these resources are affordable to many in rural and remote areas, whereby the lives of people are improved.

Considering the total energy produced from renewable sources, the cost of the resources and of the technologies used to harness the energy exceeds its value in most cases. Taking the example of wind power, windmills, wind turbines and wind pumps have to be installed over large extents of land, which becomes very costly when energy is to be generated on a scale fit for industrial use, since industries require far more electrical energy than windmills can produce unless they are installed over large areas of land.
At the same time, the use of solar energy is very economical in the sense that solar radiation is a naturally occurring phenomenon; however, the technology required to harness solar energy on a large scale proves to be very expensive. Following these arguments, it is clear that renewable energy sources are difficult to harness and expensive to produce for large-scale use. It thus becomes difficult to depend entirely on renewable sources, since the value of the energy they produce has yet to exceed the amount invested in installing the technologies, leading to an imbalance that may jeopardize the global economy. Other disadvantages include the extensive amount of land used in the case of windmills, as well as the killing of birds by the windmills' vanes.

Conclusion and Recommendations
Under the present circumstances, then, the best feasible solution available to mankind to sustain life and prosper on this planet is to harness these renewable sources of energy in the most effective ways possible: effective in terms of cost of generation, effective in terms of usability, and effective in every other way that supports the growth of global civilization. Demand for energy will continue to grow rapidly even if governments adopt vigorous policies to conserve energy. This in turn will create demand for new energy resources and for compatible technologies to harness them with the utmost vigor. Thus, to conclude, in the future the alternative or renewable sources of energy will definitely have to meet the energy crisis, which has already started to pose a threat to human civilization. Adequate investment and sufficient time are therefore needed to develop effective and stable technologies to utilize these sources properly, so that at some point the renewable sources of energy will be sufficient to replace the conventional energy sources, slowly but steadily. From the discussion above of the issue that the total expense of energy extraction from alternative sources quite often exceeds the value of the energy obtained, there emerges a great challenge to the adoption of alternative energy sources. It is therefore recommended to develop the technologies employed in harnessing these renewable sources effectively and economically. Once these technologies are fully developed, it will be easy to obtain a large output of energy that will prove to be profitable.

Nonetheless, the advantages far outweigh the disadvantages. In almost every developed and developing economy around the world, scientists and engineers are working constantly on the research, development, design and deployment of superior energy conversion technologies able to run on these renewable sources. Thus the most important task facing the world at the present hour is to develop proper strategies to manage the transition from dependence on conventional fuels to greater reliance on other sources of energy, specifically the green renewable sources, which are available freely in nature and will continue to be available for future generations in the decades to come.

Technology Integration in the Lessons Part II

1. The purpose of integrating technology into the lessons you saw -- what were some of the benefits?
The purpose of integrating technology into the lessons is basically to help students develop their potential and maximize their capacity to learn more and excel in school. Generally, technology integration motivates students to go to school and learn their lessons well because of this new and interesting approach to learning. Some of its benefits are most visible among students with disabilities, since it enables them to cope with the lessons more easily; with the aid of modern technology in their learning process, they can function and learn like any other student does. Another benefit of technology integration is that it is enjoyable, so many students take an interest in it. According to one professor in the video, this approach not only decreases the number of drop-outs in schools but also lets students learn in a way that is fun and different.

2. The computer experience you have had that might be similar to what you saw.
In my own experience, a teacher in one of my subjects assigned us to report in class on the history of our school. My group mates and I made use of a PowerPoint presentation to narrate the school's history. We also used pictures and video clips to make the report interesting. At the end of the day, our teachers and classmates were impressed with our output.

3. The aspects of the technology integration that interested you most -- what might you like to be able to do as a teacher?
Personally, the aspect of technology integration that interests me most is the one where the elementary teacher at Tolenas Elementary School let her young students explore and document their town using camcorders and computers. For me, it is best to teach children to participate in school at a young age, and I found the approach effective since her students took the project seriously, dedicating their time and effort to it. As a teacher, what I would like to do is motivate my students to learn by making the subject appealing and interesting to them. There are many ways to do that, and one of them is to incorporate modern technology into the learning method.

This research paper looks at developments in the field of information and communication technology, with special emphasis on the introduction of the computer. The information age is built on technical advancements in electrical and electronic information management. The revolution in information management and communication started with the development of the telegraph, which allowed electronic data to be transferred in an instant. This was later expanded by the development of the telephone, the radio and the television. The introduction of the digital computer enhanced the management of information in many ways, and as the computer was advanced and refined, communication and processing technologies were combined into a single network that has transformed the world. The paper traces the historical development of the computer and the evolution of the internet age.

The term information technology is used to refer to an entire industry. At the present, information technology is the use of computers and software to manage data. It mainly refers to the computers and the electronic means used to get, process, communicate and keep information. However, the term is not new and has not always referred to issues concerned with computing alone. Information technology has been in existence for a very long time. We can consider information technology to be as old as the human brain. The most appropriate definition of information technology is the communication and storage of information including the processing and using of the stored information. People have always managed to communicate through the technology that has been in existence at a given time in history.

The history of information technology began well before the invention of the digital computer and can be divided into four main ages. The first was the pre-mechanical age, covering the period before 1450 AD, when people communicated through spoken language and simple drawings called petroglyphs, which were carved into rock. The pre-mechanical age was followed by the mechanical age, covering the period between 1450 AD and around 1840 AD. During this period many more technologies were developed, such as mechanical calculators and the slide rule; the period was the link between the ancient world and modern technological developments. The third age was the electromechanical age, which lasted from about 1840 to 1940 and saw the first developments in the telecommunications industry, including the telegraph, the telephone and the radio, which led to tremendous advancements in information technology. The last age is the electronic age, covering the time from 1940 to the present, which has been marked by the extensive use of computing machines.

Ages in the development of information technology

The Premechanical age 
The first digital calculator used by man was probably counting on the fingers. The abstract notion of using stones and sticks to represent quantities marked the beginning of arithmetic and of the number system as we know it today. The stones could be used to add or subtract very quickly, and they were later strung as beads on rods to form the abacus. In this simple machine, each wire of beads represents a positional digit and can be used to add or subtract numbers; multiplication could be achieved through repeated addition.
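As a small illustration of that principle (purely illustrative, not drawn from any historical source), the following Python sketch multiplies two numbers using nothing but repeated addition, just as the paragraph describes:

    def multiply_by_repeated_addition(a, b):
        """Multiply two non-negative integers using only addition,
        mirroring how multiplication was carried out on an abacus."""
        total = 0
        for _ in range(b):       # add 'a' to the running total 'b' times
            total += a
        return total

    print(multiply_by_repeated_addition(7, 6))   # 42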

The first forms of communication were speech and picture drawings. The first form of writing, called cuneiform, was developed by the Sumerians of Mesopotamia around 3000 BC. They wrote on clay tablets using a stylus to scratch marks that were preserved once the clay dried. Later, other writing materials such as skin parchment and papyrus reeds were introduced; the early writings were mainly books containing religious instruction. Another form of writing, known as hieroglyphics, developed in Egypt and was mainly done on scrolls. By 2000 BC the Phoenicians had developed symbols which became the very first alphabet; the Phoenician symbols were later adopted by the Greeks, who also introduced the vowels, and the Romans gave the Greek alphabet the Latin names in use today. As alphabets became more popular, the growing need for writing created demand for pens and paper. The Chinese invented the first paper, made from rags, around 100 AD, and it formed the basis of modern paper-making technology. The first numbering systems were developed in Egypt and India. The Egyptian system used vertical lines to represent the numbers 1 to 9, while ten was a U or a circle, one hundred a coiled rope, and one thousand a lotus blossom. The modern numbering system was created by Indians, who invented the nine-digit system. The most profound invention of the pre-mechanical period was the abacus, the first information processor.

Mechanical age  
This period was marked by the first information explosion after the invention of the movable metal-type printing process by Johann Gutenberg of Germany, which led to the development of book indexes and the increased use of page numbers. The very first example of the analog computer was the slide rule, invented by William Oughtred, an English clergyman, in the early 17th century. The slide rule used logarithms to simplify multiplication and division: the numbers are represented on a movable logarithmic scale rather than standing for discrete values, so multiplication and division are carried out by adding and subtracting distances on the scale. The Pascaline, a very popular mechanical calculator, was invented by Blaise Pascal, while Charles Babbage developed the Difference Engine, which could tabulate polynomial equations by applying the method of finite differences.
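A short worked example (again purely illustrative) shows the principle the slide rule exploits: since log(x·y) = log x + log y, multiplying two numbers reduces to adding two lengths on a logarithmic scale and reading off the antilogarithm:

    import math

    def slide_rule_multiply(x, y):
        """Multiply two positive numbers the way a slide rule does:
        add their logarithms, then take the antilogarithm."""
        return 10 ** (math.log10(x) + math.log10(y))

    print(slide_rule_multiply(3.0, 4.0))   # approximately 12.0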

Babbage also designed the Analytical Engine, the first general-purpose programmable computer. The Analytical Engine would have been capable of executing any arithmetic operation and could be programmed using punched cards. His work was never completed, since there was no urgent need for such a machine at the time and the design was too complicated for the heavily mechanical implementation the inventor had envisioned. He was essentially ahead of his time, since the design later informed the development of the modern computer; it already contained parts analogous to the modern computer, including the store, the mill and the punched cards, the last of these inspired by the Jacquard loom made by Joseph Marie Jacquard. Another important invention of this period was the stepped calculating machine developed by Gottfried Wilhelm von Leibniz, a German mathematician and philosopher. Although most of the machines developed during this period may not look effective today, they were huge inventions for their time.

Electromechanical age
During this period, the harnessing of electricity was the main enabler, as information could now be converted into electrical signals. The period saw the beginning of telecommunications with the development of the telegraph at the start of the 19th century. In 1837, Samuel Morse developed the electromagnetic telegraph together with the Morse code he had devised. The first message was transmitted over an experimental line from Washington to Baltimore in 1844. By 1866, the first transatlantic telegraph line, the Atlantic cable, had been laid to link the United States and Europe. In 1876, Alexander Graham Bell invented the telephone, which was then combined with electric telegraphy, and the two inventions operated together for many years. In 1894, Guglielmo Marconi invented the radio, based on the idea that electric waves can travel through space and produce an effect at a distant point. The inventions that occurred during the electromechanical period led to considerable advancements in the information technology industry.

The period also saw the beginning of electromechanical computing with the development of the census tabulating machine by Herman Hollerith, whose company later became part of the International Business Machines (IBM) Corporation. Data was kept on punched cards, which were then processed by collating, sorting and adding. The group also developed the Hollerith code used for encoding alphanumeric data as punched holes in a card. A team led by Howard Aiken at Harvard developed the electromechanical Automatic Sequence Controlled Calculator (ASCC, or Mark 1), which was based on the idea of Babbage's earlier Analytical Engine. It was built from switches, electromechanical relays, moving shafts and clutches; it comprised more than 750,000 parts and close to 500 miles of wire, measured about 50 ft in length and 8 ft in height, and weighed about five tons. It was made to meet the heavy computational needs of the Second World War and could work at a hundred times the speed of the human brain. It was the first large-scale automatic and truly programmable digital machine. It was programmed using punched cards and formed the basis of future attempts to reduce the size of computing machines.

Electronic age
Since the 1940s, the information technology industry has seen tremendous advancements, especially in digital computing. The first invention of this age was the Electronic Numerical Integrator and Computer (ENIAC), developed in 1946 by a team at the University of Pennsylvania led by J. Presper Eckert and John Mauchly. The ENIAC used vacuum tubes instead of the mechanical relays used in the Mark 1. It was six times heavier than the Mark 1 but could work 1,000 times faster (about 100,000 times faster than the human brain). Its main setbacks were that it consumed a great deal of electricity and could not store its programs. After this, researchers began to work on stored-program computers. In the late 1940s the Electronic Discrete Variable Automatic Computer (EDVAC) was designed as a stored-program machine, and by 1948 scientists at Manchester University had built the Manchester Mark 1 prototype, the first stored-program computer. In 1949, Maurice Wilkes at Cambridge University developed the Electronic Delay Storage Automatic Calculator (EDSAC), which became the first practical stored-program computer. At the beginning of the 1950s, another computer, the Universal Automatic Computer (UNIVAC), was developed for commercial use. These machines marked the beginning of the four generations of digital computing.

Computer generations 

The First Generation (1951 to 1958)
The first generation computers used vacuum tubes as their only logic elements, and punched cards were used to enter data and for external storage. They also had rotating magnetic drums for the internal storage of data and programs, which gave faster access to stored information than punched cards. Programs were written in machine language and assembly language, the latter requiring an assembler to translate it into machine code; machine language, the lowest-level language the computers could understand, meant they could solve only a single problem at a time. Punched cards and paper tape were used to enter data, while output was produced as printed copies. The first generation computers were huge, occupying entire rooms; they were expensive to operate and consumed a great deal of electricity, generating a lot of heat. Examples of first generation computers include the UNIVAC and the ENIAC.

Mass production of computers began when IBM started to produce the 650 magnetic drum calculator. The Semi-Automatic Ground Environment (SAGE) was used to connect hundreds of radar stations and marked the beginning of expansive computer communication links. The period between 1945 and 1960 was marked by increased computation, even though computers remained inaccessible to most people.

The Second Generation (1959 to 1963)
The computers of this period had the vacuum tubes replaced by transistors as the logic elements. The transistors were made from crystalline semiconductor materials, which were superior to vacuum tubes and helped to make the machines smaller, faster, less expensive and more reliable. For the external storage of information, punched cards were replaced by magnetic tapes and disks. The internal storage was now composed of magnetic cores, which could be polarized in one of two directions to store data. The period also saw the development of high-level programming languages like FORTRAN and COBOL, which allowed programs to be written in words.

The Third Generation (1964 to 1979)
In this period, individual transistors were replaced by integrated circuits, and punched cards were completely phased out in favor of magnetic tapes and disks as external storage media. Information was stored as magnetic pulses in tracks around metal cylinders, and read/write heads recorded or recovered the data. The internal storage system began to use metal oxide semiconductors (MOS) instead of magnetic cores; both the integrated circuits and the MOS memories used silicon-based chips. The integrated circuits helped to increase the speed and efficiency of the machines, and the development of operating systems and superior programming languages like BASIC was under way. The third generation computers also saw the introduction of keyboards and monitors to enhance user interaction with the operating system, which allowed the computers to run several applications at the same time while a central program supervised the memory.

The Fourth Generation (1979 to the present)
In this era there has been the development of large-scale integrated circuits (LSIs) and very large scale integrated circuits (VLSIs). The other notable development has been the introduction of microprocessors that contain the memory, logic and control circuits (together forming the central processing unit, or CPU) on a single chip. Compared to the first generation, whose machines could fill entire rooms, a microprocessor can fit in the palm of the hand. The silicon chips could comprise all the components of the computer, including the CPU, memory and the input and output control channels, which reduced the size of computer hardware considerably and led to the development of personal computers, or PCs, that could be used in homes. The Apple II personal computer was released in 1977, while the Apple Mac was launched in 1984. The IBM personal computer was released in 1981, at the same time that the Microsoft operating system was debuting in the market. Software products in the Fourth Generation period have been unprecedented, including those from Microsoft, Lotus, UNIX and many others. From the 1980s, the graphical user interface (GUI), the mouse and other hand-held devices were introduced and have kept changing over the years. The small computers were made more powerful and could be linked through a network to facilitate communication, leading to the development of the internet.

The internet age
In the recent past, more and more people have become accustomed to spending time on the internet. The internet has come to have a very powerful impact on the way people live and on how they generate and use the information available online. It has managed to revolutionize computing and global communication like nothing before. The internet is the result of a long history of inventions which began with the development of the telegraph and continued through the introduction of general-purpose programmable computers. The invention of the telegraph, the radio and the computer laid the foundation for the eventual integration of the different technologies. The internet has become a worldwide broadcasting tool, a means of releasing information and a channel for collaboration and relations among people regardless of geographical distance.

The first computer communication network was the ARPANET, founded in 1969, which became the first countrywide computer link and later developed into the internet. The Department of Defense aimed to develop a way for computers to communicate in order to enhance security. The internet also grew out of the visionary imagination of experts who appreciated the value of using computers to share information on inventions and advancements in scientific and military spheres. The idea was proposed by J.C.R. Licklider, who talked about a global network of computers and moved to the Defense Advanced Research Projects Agency (DARPA) to initiate the project. Researchers at MIT had been developing the theory of data communication, especially packet switching, from the early 1960s, and by 1965 they had put the theory into practice by using packets to carry computer communication over telephone lines. By 1969, the network, then called ARPANET, connected four computers at different campuses in the southwestern United States. The first host-to-host message was transmitted over the ARPANET from UCLA to the Stanford Research Institute on October 29, 1969, but the system crashed before the message was completed. In 1970, the ARPANET stretched across the USA after the installation of a node at the Bolt, Beranek and Newman (BBN) Corporation. BBN had won the contract to build the Interface Message Processor (IMP), the device that formed the basis of the program to link the computers: an IMP was to be installed at each host and form part of the network between the computers by utilizing packet-switching technology. In 1973, the first international connections were established to Britain, and the Transmission Control Protocol (TCP) was developed to provide network transport and forwarding. TCP was later divided into TCP and IP, where IP dealt with the addressing and forwarding of individual packets while TCP controlled the flow and the recovery of lost packets.
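To make the packet-switching idea and the later TCP/IP division of labour concrete, here is a tiny, purely illustrative Python sketch (not the real ARPANET or TCP/IP code): a message is cut into numbered packets, the simulated network shuffles them and loses one, and a TCP-like receiver reorders what arrived and fills in the missing piece as if it had been retransmitted:

    import random

    def make_packets(message, size=4):
        """Split a message into numbered packets, as in packet switching."""
        return [(i, message[i:i + size]) for i in range(0, len(message), size)]

    def unreliable_network(packets):
        """Simulate the network: shuffle packet order and drop one packet."""
        packets = packets[:]            # copy so the sender keeps the originals
        random.shuffle(packets)
        return packets[:-1]             # one packet is "lost" in transit

    def tcp_like_receive(received, sender_packets):
        """Reorder packets and recover missing ones, as TCP does."""
        have = dict(received)
        for seq, chunk in sender_packets:
            if seq not in have:         # detect the gap, "retransmit" the chunk
                have[seq] = chunk
        return "".join(have[seq] for seq in sorted(have))

    sender_packets = make_packets("PACKET SWITCHING DEMO")
    print(tcp_like_receive(unreliable_network(sender_packets), sender_packets))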

During the early years, the internet was used only by experts, engineers, scientists and library officials, and whoever used it had to learn a complex system. The first public demonstration of the ARPANET occurred in 1972 at the International Computer Communication Conference (ICCC). In the same year, electronic mail was introduced; it began as a way of creating easy communication among the ARPANET developers. The initial ARPANET later developed into the internet on the basic idea that there would be multiple independent networks of arbitrary design. It began with the packet-switching network and later incorporated packet satellite links, land-based radio networks and many other networks. The underlying technological idea is open-architecture networking, in which the choice of any individual network technology is not dictated by the overall architecture but is left to the provider, and the networks are designed to work with one another through a meta-level internetworking architecture. Initially the networks connected through the Network Control Protocol (NCP), which was later replaced by the Transmission Control Protocol and Internet Protocol because it lacked the ability to address other networks.

During the 1980s, more networks were formed, especially in the educational and commercial sectors, that wanted to use the technology developed by ARPA. The Computer Science Network (CSNET) was created by computer science groups in education and industry. Another was BITNET (Because It's Time Network), which served the other groups in the educational sector; these communities were also instrumental in the development and establishment of the World Wide Web, proposed in 1989. The different networks signed agreements that allowed them to build links among themselves. The World Wide Web was designed to help locate files and other documents on the internet: the system assigns a uniform scheme of addresses and hypertext links to all the data, where hypertext is the arrangement of pieces of information into links that users can follow. WWW documents and files first became available on the internet from 1991. In the last two decades, the internet has been transformed tremendously as its use has become widespread and accessible to many people.

The computer was developed out of the need to solve computational problems, beginning with simple mechanical devices like the Pascaline and the Analytical Engine. After the harnessing of electricity, the mechanical devices were modified through further invention, leading to electromechanical machines. From the mid 20th century, major developments made the computer faster, cheaper, smaller, more reliable and far more efficient. The most profound development has been the breakthrough that enables computers in different regions of the world to communicate in an instant. This has revolutionized the information technology sector, as people in different parts of the world are able to access enormous amounts of information from the comfort of their homes. The management of information has become very effective, as computers are able to handle large amounts of information with ease. The future of information technology is poised for greater things as more research and innovation continues to take place.

Software Project Plan - HR Management system.

The Human Resource (HR) department is one of the most sophisticated departments in an organization. Its main objective is to meet both the organizational needs and the employee needs of the organization. To stay ahead in this competitive world, information technology must be adopted within the organization, and the HR department needs an information system to manage its operations effectively.

Goals and objectives
The main objective of the HR management system is to automate various HR activities. The goals of the HRMS are as follows:
To minimize manual work in HR activities
To speed up the various tasks and get accurate results
To reduce the paperwork needed
To obtain timely results
System Scope
General Requirements

To track employee-related information, from basic details to extended records.

Payroll calculation for employees and maintenance of payroll-related activities.
A recruitment management module covering the various aspects of recruitment.
An employee grievance and suggestion module to take care of employees' suggestions.

Training and development activities must also be incorporated into the software.

Various other features, including report generation, a document management system, a centralized mailing and querying system, leave management and attendance tracking, and a performance review and appraisal system.

Extended Requirements
Online integration of these systems with other systems in the company.

Web enabled support to access the software from any point of contact.

Automatic backup service.

System Context
The HR management system is used by multiple employees simultaneously in an organization, so the system should support concurrent access.
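As a minimal sketch of what concurrent access can involve, the following Python snippet (hypothetical names and data; the plan does not prescribe an implementation, and a real HRMS would rely on database transactions) uses a lock so that simultaneous updates to the same employee record do not interleave:

    import threading

    # Hypothetical in-memory employee store used only for illustration.
    employees = {"E001": {"name": "A. Smith", "leave_balance": 12}}
    store_lock = threading.Lock()

    def deduct_leave(emp_id, days):
        """Safely deduct leave days even when called from several threads."""
        with store_lock:               # only one update proceeds at a time
            employees[emp_id]["leave_balance"] -= days

    threads = [threading.Thread(target=deduct_leave, args=("E001", 1)) for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(employees["E001"]["leave_balance"])   # 9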

Resources
The resources can be broadly classified into people and software/hardware requirements. The number of people required to develop the project depends on the skill sets they possess; on average, developing an HR management system requires 5 to 6 people, though the number changes with the size and complexity of the system. The basic hardware configuration needed for successful execution of the HR management system is:
User side (client): 10GB disk space, 256MB RAM, VGA monitor, a compatible operating system, keyboard and mouse, and a processor rated in MHz.
Server: 100GB disk space, a minimum of 1GB RAM, a server-side operating system, backup systems, and a processor rated in GHz.
Project Task Schedule

The major steps in project development are planning; requirement analysis and specification; design; coding; integration; and testing and maintenance.
Project Schedule
Planning: 2 weeks
Requirement analysis and specification: 5 weeks
Design: 4 weeks
Coding: 6 weeks
Integration: 6 weeks
Testing and maintenance: 3 weeks

These phases total 26 weeks (about six months). However, there could be small changes depending on the manpower and other resources available. Team members must cooperate among themselves for better project results.

Project Cost Estimate
The project cost can be estimated by several methods, such as the COCOMO model and LOC-based cost calculation. Let us consider the COCOMO model. Its two basic formulas are

Effort    E = a × (KLOC)^b   (person-months)
Duration  D = c × (E)^d      (months)

Every project can be classified into one of the existing COCOMO categories, each of which has default values for a, b, c and d. Since ours is an organic project, the values are a = 2.4, b = 1.05, c = 2.5 and d = 0.38. Let us assume the project will contain about 6,000 lines of code, i.e. 6 KLOC.

E = 2.4 × (6)^1.05 ≈ 15.8 person-months
D = 2.5 × (15.8)^0.38 ≈ 7 months

This indicates that, with roughly 2 team members (15.8 person-months spread over about 7 months), the project can be completed in around 7 months.
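The same organic-mode arithmetic can be checked with a short Python sketch of the basic COCOMO formulas (illustrative only; the function name and defaults below are not part of any project tooling):

    def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
        """Basic COCOMO for an organic project:
        effort E = a * KLOC**b (person-months), duration D = c * E**d (months)."""
        effort = a * kloc ** b
        duration = c * effort ** d
        staff = effort / duration          # average team size needed
        return effort, duration, staff

    e, dur, staff = basic_cocomo(6)        # 6 KLOC, i.e. about 6,000 lines of code
    print(round(e, 1), round(dur, 1), round(staff, 1))   # roughly 15.8, 7.1, 2.2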

Implementation stage of the Project
After completion of the design stage, the coding process begins. If there is an existing system, the coding should be in line with it. The code should be error free, and various automated tools can be used to generate code from the outputs of the previous steps.

Testing and Installation
The system must undergo various levels of testing, such as functional testing and system testing. A test plan must be formed and executed properly, and the test results must be reviewed to trigger the debugging procedures where necessary. After testing has been verified, installation must be done at the client side.

Training and Documentation
Each and every step in the project must be documented appropriately. Users must be trained on the new system, online help manuals should be provided, and a help line must be available to resolve user queries.

Astronomy: The Oldest Science

People have been looking up, trying to explain the universe for as long as there have been people (Greene, 2009).  Priests and other holy noblemen were the first astronomers, studying the movement of heavenly bodies so as to determine sacred celebrations in addition to agricultural cycles (Greene).  In fact, astronomy was practiced in various ancient civilizations, for example, by the ancient Chinese, the Mayans and the Harappans not only for the abovementioned reasons but also to predict the future and to orient cities.  Astrology and religion were combined with astronomy to provide the answers sought by ancient people from the heavens (The History of Astronomy, 2007).
   
Built between 3100 and 2000 BC, Stonehenge in England appears to be an astronomical site of the Stone Age (Smith, 1999).  Lunar-solar calendars were created around 2000 BC in Mesopotamia and Egypt (Russell, 2008).  The ancient Greeks seem to have been the first to devise astronomical theories (Greene).  Around 280 BC, Aristarchus suggested that the earth revolves around the sun; he also provided an estimate of the distance between the earth and the sun.  Hipparchus compiled a catalogue of more than 850 stars around 130 BC.  The solar (Julian) calendar was introduced in the Roman Empire in 45 BC (Russell).  In 140 AD, Ptolemy wrote about the geocentric theory of the universe (Russell).
   
The Baghdad school of astronomy was founded in Iraq in 813 AD.  In 1054 AD, Chinese astronomers reported observing a supernova in Taurus.  Egypt built an observatory in 1120 AD, and Iran constructed its own in 1259 AD (Russell).  In 1543, Copernicus published his heliocentric theory of the universe (Russell).  Around thirty years later, Tycho Brahe reported his observation of a supernova.  In 1609 AD, Galileo used the telescope to observe heavenly bodies; among other things, he found that the Milky Way is made up of stars.  Since that time, the profession of astronomy has evolved by leaps and bounds, consistently adding new theories and new observations to the human knowledge base.  Technologies for observing heavenly bodies have also been evolving, and spacecraft are being built and consistently improved upon.  In 1961 AD, Yuri Gagarin became the first human being in space (Russell).
   
Man has visited the moon by now and continues to develop new technologies and instruments to deepen his understanding of the universe.  In the year 2000 AD, astronomers reported having found new evidence for water on the planet Mars (Russell).  Today, new instruments such as Herschel's Spectral and Photometric Imaging Receiver (SPIRE) are busy taking photographs of celestial objects to answer countless questions posed by astronomers (Herschel Images Promise Bright Future for Astronomy, 2009).  In the future, astronomers also expect the real nature of dark energy and dark matter to be understood, and they are looking for extraterrestrial life on extrasolar planets.  What is more, astronomers would like to identify the first stars to have formed right after the Big Bang; the James Webb Space Telescope is to be launched in the year 2013 AD for this reason (Winters, 2009).
   
The future of astronomy is bright indeed.  Astronomers are planning mass international collaboration to fund and manage astronomical facilities, which Winters (2009) describes as large and expensive.  Even big and expensive telescopes call for international collaboration, an example of which is the Atacama Large Millimeter Array (Winters).  This facility is being constructed in Chile, engaging astronomers from Japan, the United States and the United Kingdom (Winters).  Astronomers are further looking forward to the construction of the European Southern Observatory's European Extremely Large Telescope, the world's biggest "eye on the sky" (Winters).  After all, there is no dearth of knowledge in the universe, and astronomers have an insatiable appetite for this information.