Website Plan (3rd Section)
Among the many factors that determine the success of a particular website are the media formats and plug-ins embedded in it.

Below are the media formats and plug-ins commonly required for a website:
Java
Acrobat Reader
QuickTime
Windows Media Player
Adobe Flash

Graphics are equally important in websites. They serve not only an aesthetic purpose, making a website look good, but also make the website easier to use through their illustrative character.

The following are guidelines for how graphics should be used in a website:

Graphics must be used to maximize and enhance the users' experience.

Graphics must suit the website; that is, they must be related to the website content.

Ensure that graphics are informative and successfully convey the message of the website.
Graphics must not be distracting. Some websites with superfluous graphics tend only to entertain users without informing them.

Graphics must not be culturally offensive. Since a website can be accessed by almost everybody, graphics must promote and instill moral values.

The content gives life to every website and is perhaps the most important component of all. The goal of the website content is critical to the success of every website. For instance, suppose the owner of a website is selling a particular product. The salability of the product will largely depend on how it is described on the website. The content in this case should highlight the importance of this particular product: it must successfully convey the advantages of owning the product and how owning it can improve the buyer's way of living.

Obtaining content can be done in various ways as detailed below.

First and foremost, one must know what topic and/or product the planned website is going to promote.

Study the topic and/or the product comprehensively. One must know the topic and/or the product from all angles.

Refer to other websites related to the topic and/or the product you are showcasing.
Conduct interviews when necessary as this adds to the weight of the website content.
Seek assistance from professional service providers to enhance website content.

Once the content is in place, it is essential that the website owner obtain exclusive rights to it. This upholds and protects intellectual property rights over the website, and it covers everything in the site, such as the content, graphics, and design, provided they are originally conceptualized.

The website design must emphasize usability and accessibility. Users must at all times find the site user-friendly: it should be easy to navigate and operate, allowing all users to appreciate the website itself. The site's design must ensure that learning is facilitated, not hampered.
Accessibility of the website is another important concern, since it is also a way of promoting the website. The site must be accessible from any part of the world in order to reach its target audiences.

Site purpose

The purpose of the website would be to educate visitors on different programming languages. Irrespective of a person's knowledge level in programming, the website would provide the information necessary to complete a real project or a student assignment. This can be accomplished by following three different strategies:

Tutorials
FAQ section listing out common queries and solutions
User Forum  

Target audience
The audience for the website includes students, aspiring programmers, professionals and even experienced veterans in the software industry. The website would help the visitors learn new technologies as well as advance their knowledge in a specific programming language. The website would also serve as a forum for programming enthusiasts to share ideas and help out each other.  

Content needs
The content of the website should be able to cater to a diverse audience with different levels of mastery of various programming languages. This would include an array of programming languages pertaining to webpage creation, database development, and application software development on various platforms. The website's backbone would be the step-by-step tutorials on how to write code to create programs in a specific language (Programming Tutorials, 2010). Providing adequate example programs and illustrations would also be necessary to help the user understand each tutorial better (W3Schools, 2010). The FAQ section would be constantly updated by scouring Internet forums that discuss programming queries and solutions.

The content should be organized and made easily accessible to users to ensure repeat visits and referrals (World Wide Web Consortium, 2010). This can be accomplished by allocating a separate page for every programming language, with the links to those pages residing on the homepage of the website. The programming languages would be grouped by application: for instance, Visual Basic would come under application software, while HTML would be listed under webpage development.

Website Purpose and Architecture Plan

I. Profession: Computer Programming
Computer programming today is almost synonymous with Internet programming. To take advantage of the interconnectivity that the World Wide Web offers, software developers are now finishing what they call the soon-to-be-popular "webtop". This would replace the desktop that people are accustomed to, making computer work such as word processing and spreadsheets entirely web based. To promote the use of web applications, as a computer programmer I would like to create a website that migrates the development, compilation, running, and storage of software online.

II. Review of Related Literature
Formerly, before a programmer could deploy a software application, he had to develop it in a command-prompt editor. Programmers soon saw the need for a better text editor, one capable of undo and redo. By comparison, for an HTML file to be deployed as part of a website, it first had to be written in a text editor. Programmers then found a way to develop faster through what-you-see-is-what-you-get (WYSIWYG) editors. These editors offer drag-and-drop functionality in their design view and color-coded HTML editing in their code view. Some developers are now bringing these WYSIWYG editors to blog sites, making it easy to build an HTML website from built-in templates. Through the use of Asynchronous JavaScript and XML (AJAX), building a dynamic website can be as easy as one click, which is far better than the old way of developing locally and then uploading to the web to deploy. Image editors are also now available online to further enrich one's applications. Since the trend for new notebook computers is to lower hardware cost (for example, by limiting hard-disk capacity) while improving portability and Internet connectivity, it is remarkable that almost all of the applications we used to wait through long installations for on our personal computers are now available at our fingertips simply by typing the correct URL into a browser's address bar.

III. Computer programming website plan for implementation
Since web-based applications that offer the same functionality as the applications we used to enjoy on our desktops are getting faster every day, driven by software developers' urge to deliver the best to their clients, I would like to create a website that caters to the development of this kind of software. The website should register interested developers and, right after they confirm their email accounts, give them access to the rich features it offers: an online WYSIWYG HTML editor with the same drag-and-drop functionality, and an online source-code editor with the color coding, parsing, compiling, and running facilities that most integrated development environments offer. The project is a big one, but with AJAX and PHP server-side scripting we could emulate what offline WYSIWYG and IDE editors do.

If I am able to implement it, the website would also set up a forum for its end users, the software developers, giving them a way to share code and information helpful to their respective projects.

IV. Materials
Hypertext markup language (HTML)
PHP Hypertext Preprocessor (PHP)
AJAX and JavaScript
Cascading Style Sheets
MySQL for database

The Solar System in Bill Bryson's A Short History of Nearly Everything

Generally we have a poor conception of the scale of outer space. In his discussion of our solar system, Bryson makes an effort to give us a notion of the vastness of it all. The overly simplistic model of the sun and its nine planets that we usually come across can be very misleading: in the pictures and models, the planets are depicted one next to the other, more or less evenly spaced. But this is quite far from reality. For example, Neptune is actually five times farther from Jupiter than Jupiter is from us, receiving only 3% of the sunlight Jupiter receives. Even if we compress the giant gas planet Jupiter to the size of a dot, Pluto would still be 35 feet away in the scale model. Pluto is so far away that the sun would appear from it as only a faint dot.

If Earth were the size of a pea, going by the models we are used to we would imagine Jupiter as a big ball at most a couple of feet away, but it would in fact be 1,000 feet away (its size in the model is unspecified by the author). And Pluto would be a mile and a half away, at about the size of a microscopic germ. Again, going by our common-sense perception, we might place the nearest star in this model at, say, at most a hundred miles away, but we would be grossly mistaken: the nearest star is not even a thousand miles away, but ten thousand. The distances are mind-boggling.
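As a rough order-of-magnitude check of these figures (assuming a pea roughly 7 to 8 mm across, since Bryson does not specify a size), the scale arithmetic works out approximately as follows:

```latex
% Rough check of the pea-sized-Earth scale model.
% Assumed pea diameter (not given by the author): about 7.5 mm.
\[
\text{scale factor} \approx \frac{D_{\text{Earth}}}{D_{\text{pea}}}
  = \frac{1.27\times 10^{7}\,\text{m}}{7.5\times 10^{-3}\,\text{m}}
  \approx 1.7\times 10^{9}
\]
\[
\text{Jupiter at closest approach:}\quad
  \frac{5.9\times 10^{11}\,\text{m}}{1.7\times 10^{9}}
  \approx 350\,\text{m} \approx 1{,}100\,\text{ft}
\]
\[
\text{Pluto:}\quad
  \frac{5.9\times 10^{12}\,\text{m}}{1.7\times 10^{9}}
  \approx 3.5\,\text{km} \approx 2\,\text{miles},
\qquad
\text{Proxima Centauri:}\quad
  \frac{4.0\times 10^{16}\,\text{m}}{1.7\times 10^{9}}
  \approx 2.4\times 10^{7}\,\text{m} \approx 15{,}000\,\text{miles}
\]
```

These land in the same ballpark as the figures quoted above; the exact numbers depend on the pea size one assumes.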

The solar system, meaning the realm held by the sun's gravity, does not end at around Pluto but in fact extends halfway to our nearest star, Proxima Centauri, which is about 4 light years away. Pluto happens to be only one-fifty-thousandth of the way to the outer edge of our solar system. Beyond Pluto lie two vast zones of icy lumps, comets, and assorted cosmic debris revolving around the sun, known as the Kuiper Belt and the Oort Cloud. It is rather strange to think that these cosmic bodies revolve around a star that would be practically invisible from their vantage point, but such is the vastness of our solar system. The twin Voyager space probes, launched in the mid-seventies and currently well past Pluto, would reach the Oort Cloud in about ten thousand years.

Bryson states that it is inconceivable that humans will ever reach distances so far, no matter how sophisticated our technology becomes; by implication, he rules out any journey to the stars. That is a depressing thought; nevertheless, we are making good progress in our exploration of outer space. Even within the next twenty years we are likely to discover a good deal more about the solar system and our place in the universe.

Bryson even speaks lightly about the idea of a manned mission to Mars, stating that the costs are formidable and that no solution has yet been found for shielding the astronauts from deadly solar radiation. But this is very much an erroneous observation. Back in 1997, Robert Zubrin, the president of the Mars Society, published a book delineating all the aspects of a manned mission to Mars and how it could be achieved for a fraction of NASA's projected spending. In the past few years America has reaffirmed its commitment to space exploration by proposing a manned mission to the Moon in the coming decade and to Mars by 2030.

We already possess quite an advanced level of technology for the observation and exploration of space, and it is improving rapidly; nevertheless, space is an extremely challenging frontier indeed. As Bryson repeatedly emphasizes, space, and even our solar system, which we like to think of as our immediate neighborhood in the gargantuan stretches of our galaxy, is vast beyond conception. We have telescopes powerful enough to see from the Earth if an astronaut lit a torch on the Moon, yet we still do not know definitively how many moons there are in our solar system. Bryson remarks that during his childhood it was thought that the solar system had 30 moons in all, but at the writing of the book the count was at least 90. Since then that number has almost doubled, now reaching about 170 moons (The Planetary Society). This is largely thanks to the unmanned space probes we have sent in recent years, such as Galileo and Cassini. NASA is going to send a few more space probes and telescopes in the coming decade, and they will greatly expand our knowledge of the solar system and beyond.

Bryson notes that the astronomer who championed the existence of Pluto before its discovery in 1930 was expecting a really huge planet beyond Neptune, and Pluto turned out to be quite the opposite. However, it could still be possible to discover a really huge planet, bigger than Jupiter and almost a twin star to the sun, out there in the emptiness beyond Pluto. Within the next ten to twenty years we may be able to confirm the existence of such dark bodies in our solar system one way or the other, and perhaps finalize our picture of the solar system. Many surprises could be in store for us.

Science

One of the main components of the ocean floor is seafloor sediment. There are four types of seafloor sediments according to composition: lithogenous, biogenous, hydrogenous, and cosmogenous.

Lithogenous sediments are the type of sediments derived from rocks. These are generally carried off to the ocean by wind, rivers, rainwater run-off, and water currents. Their sizes vary widely. Larger lithogenous sediments are heavy, sink faster, and thus settle closer to land. Some of the smaller ones, on the other hand, get carried out to the middle of the ocean because of their lightness.

Sediments obtained from living organisms are called biogenous sediments, or oozes. Calcareous, siliceous, and phosphatic are the three general kinds of biogenous sediments. Calcareous oozes consist primarily of calcium carbonate shells and may form chalk after settling. Examples of calcareous organisms are coccoliths and foraminifera, and they tend to be found in warm and tropical regions. Their distribution is greatly affected by the temperature of the ocean, the population of microorganisms, and mixing with lithogenous materials on the sea floor. Siliceous oozes, on the other hand, are made of silica shells. They mostly come from organisms like diatoms and radiolarians that are mostly seen in polar and equatorial regions. Lastly, phosphatic sediments come from the teeth, bones, and scales of fish.

Hydrogenous sediments are derived from ions dissolved in the ocean that precipitate. These sediments are less common than biogenous and lithogenous sediments. Their abundance is affected by changes in temperature and pressure and by the addition of chemically active fluids. Types of hydrogenous sediments include ooids, evaporative salts, and metal sulfides.

Cosmogenous sediments are formed from materials from outer space, including cosmic dust and unburned parts of meteorites. This kind of sediment is the least common and comprises only a small fraction of ocean sediments.

Generally, biogenous and lithogenous sediments tend to dominate the ocean floor. However, their distribution is greatly affected by temperature and water depth. The calcareous type of biogenous sediment tends to accumulate in shallow, temperate regions because it dissolves more slowly in warm water. In contrast, the siliceous type, which is found around the equator and in polar regions, dissolves slowly in cold water or in upwelling zones. Lastly, lithogenous sediments are most likely found in areas that are both deep and distant from land; an example is abyssal clay.

In many parts of the world, coastal erosion is becoming a large problem. To address this problem, several techniques for preventing coastal erosion have been devised. One of these is the so-called French drain, a narrow trench filled with sand and gravel. The effectiveness of this technique depends on the amount of water it can intercept, and it provides a permanent solution to erosion. Another permanent but very practical method is leaving the natural vegetation of the shore undisturbed: the roots of the vegetation hold the soil and therefore minimize the transport of sediments. The shore can also be protected from erosion by lining it with rocks, a technique generally called rip-rapping.

Even though man can help decrease the rate of coastal erosion, some of man's works hasten the process. This is the case with the development of coastal areas. Such development, roads for example, can increase water run-off by minimizing the contact between water and soil. Since greater run-off results in greater erosion, this activity also increases coastal erosion. In some cases, the method used to decrease coastal erosion becomes an accelerator of erosion. This can happen on coasts fitted with walls: when waves collide with the wall, the water becomes turbulent, and this turbulence hastens erosion.

Complex Data Structures and How They Are Used

A data structure is a collection of variables, possibly of different data types, connected in various ways. This includes the simple array and complex structures like linked lists, stacks, queues, graphs, trees, and adjacency matrices. Complex data structures are essential in programming, especially when you need to create an instance of an object with multiple properties. To illustrate clearly how essential data structures are, we will use the syntax of the C programming language. In C, you can create a variable of a user-defined type that contains multiple simple data types (int, char, float, etc.) using the keyword struct.
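As a minimal sketch of that idea, the following C fragment groups several simple types into one user-defined type (the student record and its fields are hypothetical, chosen only for illustration):

```c
#include <stdio.h>
#include <string.h>

/* A user-defined type grouping several simple data types. */
struct student {
    char  name[50];   /* character array for the student's name */
    int   id;         /* integer identifier */
    float grade;      /* floating-point grade */
};

int main(void) {
    struct student s;            /* create an instance of the structure */
    strcpy(s.name, "Alice");
    s.id = 1001;
    s.grade = 92.5f;
    printf("%s (ID %d) has a grade of %.1f\n", s.name, s.id, s.grade);
    return 0;
}
```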

Other complex data structures, such as stacks and queues, also referred to as abstract data types, are used in different algorithms. The stack is one of the basic data structures, used widely from simple applications up to intricate ones. It follows a "last in, first out" discipline. A simple program that converts a binary string to its equivalent decimal number can be implemented using a stack. First, push every digit of the binary number onto the stack from left to right; the top of the stack will then hold the ones digit. Next, pop the top of the stack and take its value as is. Then pop the new top of the stack and multiply it by two, the next by four, and so on, doubling the place value each time, until the stack becomes empty. Sum all the computed values and that will be the decimal equivalent of the input binary string. On the other hand, another data structure called the queue follows a "first in, first out" discipline and has proven to be useful in many applications.
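A short C sketch of that conversion, assuming a small fixed-size character stack and an arbitrary example input, might look like this:

```c
#include <stdio.h>
#include <string.h>

/* A tiny fixed-size stack of characters. */
static char stack[64];
static int  top = -1;

static void push(char c) { stack[++top] = c; }
static char pop(void)    { return stack[top--]; }

int main(void) {
    const char *binary = "101101";      /* example input: 45 in decimal */
    int value = 0, place = 1;           /* place value: 1, 2, 4, 8, ... */
    size_t i;

    /* Push every digit from left to right; the ones digit ends up on top. */
    for (i = 0; i < strlen(binary); i++)
        push(binary[i]);

    /* Pop digits, weighting each by the next power of two. */
    while (top >= 0) {
        value += (pop() - '0') * place;
        place *= 2;
    }

    printf("%s in binary is %d in decimal\n", binary, value);
    return 0;
}
```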

Complex data structures are also used in searching algorithms. The depth-first search (DFS) algorithm uses a stack, while breadth-first search (BFS) uses a queue. These two searching algorithms are used in route finding, that is, computing the minimum-cost route from a current node to a target node. They are also used in algorithms for solving puzzles; one classic example is the Eight Puzzle, where the possible moves are explored using BFS or DFS.
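To illustrate how BFS uses a queue, here is a compact C sketch that computes the shortest distance (in edges) from a start node over a small, made-up adjacency matrix; the graph itself is arbitrary and only for illustration:

```c
#include <stdio.h>

#define N 6   /* number of nodes in this illustrative graph */

int main(void) {
    /* Adjacency matrix of a small undirected graph (made up for this sketch). */
    int adj[N][N] = {
        {0,1,1,0,0,0},
        {1,0,0,1,0,0},
        {1,0,0,1,1,0},
        {0,1,1,0,0,1},
        {0,0,1,0,0,1},
        {0,0,0,1,1,0}
    };
    int dist[N];
    int queue[N], head = 0, tail = 0;
    int start = 0, i;

    for (i = 0; i < N; i++) dist[i] = -1;   /* -1 marks "not yet visited" */

    /* Breadth-first search: nodes are processed in first-in, first-out order. */
    dist[start] = 0;
    queue[tail++] = start;
    while (head < tail) {
        int u = queue[head++];
        for (i = 0; i < N; i++) {
            if (adj[u][i] && dist[i] == -1) {
                dist[i] = dist[u] + 1;      /* one edge farther than u */
                queue[tail++] = i;
            }
        }
    }

    for (i = 0; i < N; i++)
        printf("shortest distance from node %d to node %d: %d edges\n",
               start, i, dist[i]);
    return 0;
}
```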

In general, complex data structures are needed because simple data types alone are often insufficient to create a fully functional program or to implement algorithms in a reasonable time.

The array is one of the simplest data structures. It is a group of objects of the same data type that can be accessed by indexing, and it can be one-dimensional or multi-dimensional. It can also be categorized as either static or dynamic: a static array has a fixed memory allocation, while a dynamic array allows you to leave the size unspecified in the declaration and set it at run time. Computations and processes on arrays usually use loops. Arrays can be used in different sorting algorithms such as bubble sort, selection sort, merge sort, insertion sort, and quick sort, and in searching algorithms such as linear search and binary search.
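As one small example of a searching algorithm over a static array, a binary search in C might be sketched as follows (the sample values are arbitrary):

```c
#include <stdio.h>

/* Binary search over a sorted array: repeatedly halve the search range. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)      return mid;   /* found: return the index */
        else if (a[mid] < key)  lo = mid + 1; /* key lies in the upper half */
        else                    hi = mid - 1; /* key lies in the lower half */
    }
    return -1;                                /* not present */
}

int main(void) {
    int sorted[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};  /* example data */
    int n = sizeof(sorted) / sizeof(sorted[0]);
    printf("23 is at index %d\n", binary_search(sorted, n, 23));
    printf("40 is at index %d (not found)\n", binary_search(sorted, n, 40));
    return 0;
}
```

Each comparison halves the remaining range, so the search takes on the order of log2(n) steps rather than n.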

The most obvious reason why an array is very convenient to use is that it simplifies programming: it makes program code shorter and neater. Let us say we have a program that computes the average grade of 100 students, prompting the user to input each of the 100 grades. This can be implemented with or without an array. To implement it without an array, 100 different variables must be declared, and the program must prompt the user 100 separate times, so the code contains 100 scanf() calls (assuming the programming language used is C). Adding up the variables is tedious for the programmer, since he has to write out all 100 variables to sum them. The result of not using arrays is very long and messy code. It is appropriate and practical to use an array in this type of problem. With an array, you can simply declare one variable to store the 100 inputs from the user, e.g. array[100]. Using a loop, it is possible to prompt the user 100 times with only a single line of code, and the same goes for adding up the inputs: the programmer need not write out all the variables used to store the data, but can use a loop and add the values as the index increments.
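A minimal C sketch of the array-based version of that program, assuming the grades are read as floating-point values, might look like this:

```c
#include <stdio.h>

#define STUDENTS 100   /* number of grades to read; arbitrary for this sketch */

int main(void) {
    float grades[STUDENTS];   /* one array replaces 100 separate variables */
    float sum = 0.0f;
    int i;

    /* A single prompt inside a loop replaces 100 separate scanf() calls. */
    for (i = 0; i < STUDENTS; i++) {
        printf("Enter grade for student %d: ", i + 1);
        scanf("%f", &grades[i]);
    }

    /* Summing is likewise one line inside a loop rather than 100 additions. */
    for (i = 0; i < STUDENTS; i++)
        sum += grades[i];

    printf("Average grade: %.2f\n", sum / STUDENTS);
    return 0;
}
```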

Modular design is an approach that breaks a problem down into smaller parts, or modules, which are designed individually. This approach gives programmers many benefits. Since real application programs grow bigger and more complex, the operations and processes in a program must be properly managed to avoid messy code and sloppy implementations of algorithms.
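As a minimal illustration of the idea in C (the module name stats and its single function are hypothetical), a modular program typically splits into a header that declares an interface, a source file that implements it, and client code that only includes the header:

```c
/* stats.h — interface of a small, hypothetical "statistics" module.
 * Other parts of the program see only these declarations. */
#ifndef STATS_H
#define STATS_H
float average(const float values[], int count);
#endif

/* stats.c — implementation of the module; it can change freely
 * without affecting callers as long as the interface stays the same. */
#include "stats.h"
float average(const float values[], int count) {
    float sum = 0.0f;
    for (int i = 0; i < count; i++)
        sum += values[i];
    return (count > 0) ? sum / count : 0.0f;
}

/* main.c — a client of the module: it only includes the header. */
#include <stdio.h>
#include "stats.h"
int main(void) {
    float grades[] = {88.0f, 92.5f, 75.0f};
    printf("Average: %.2f\n", average(grades, 3));
    return 0;
}
```

Each piece can be compiled separately, so a change inside stats.c does not force changes in main.c as long as stats.h stays the same.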

The benefits of modular design can be seen in projects done by teams. It coordinates the work of many people and manages the interdependencies between their pieces of work, the modules. An application may have several functionalities, possibly related to each other, and it is better to use modules to build those functionalities independently. In this manner, if one functionality fails it does not affect the others, since they are designed individually.

Another benefit is clearer, better-organized code. Since modular design lets you divide your code according to its functionalities, you can easily determine in which part of the code each functionality belongs.

Modular programming also enhances the readability of your code, and with more readable code a programmer can more easily fix the bugs within the program.

With modules, developers can maintain and enhance the program more easily, since modularity gives more flexibility in maintenance. Because the program code is easy to understand, maintenance will not be a problem: if the system needs to change, only the affected modules are modified. Enhancing the program can be done by adding modules for additional features or functionalities.

Generally, modular design helps programmers build robust applications; it exists to handle a program's intricacy. Applications that solve real-world problems are very complex in structure, and with such complexity there is a real need to practice modular programming.

The Social Impact of Cellular Phones

Information and communication technology (ICT) is an important aspect of American society because it improves and revolutionizes the flow of information and provides an accessible form of communication for the people. The most important and most visible component of ICT in the country is the cellular phone, as many Americans rely on this technology to communicate and connect with people from different places across the country and all over the globe. Several companies have manufactured cellular phone products with different functions but the same purpose of making communication faster, which is a sign of good and sturdy innovation. Socially, the cellular phone matters a great deal, as it enables the American people to connect and communicate with each other regardless of differences in time and place. The present study will focus on identifying the social impact made by cellular phones from the 1990s to the present in terms of dependency on cellular phones, the impact on geographic and time differences, changes in the costs of sending SMS and making calls, reasons for using cellular phones, and the basis for having a cellular phone among Americans aged 15 to 40 years old. Communication patterns will be measured based on the answers of American students and professionals.

The topic was chosen based on the significance placed on information and communication technology in the country, and more specifically on the patterns of society's use of cellular phones: dependency on cellular phones, the impact on geographic and time differences, changes in the costs of sending SMS and making calls, reasons for using cellular phones, and the basis for having a cellular phone. The researcher would like to know whether people aged 15 to 40 perceive a social impact on their communication patterns and to what degree. In addition, the differences and similarities brought about by age and generation will be pointed out. The researcher expects that some of the variables, such as reasons for buying a cellphone, sending SMS, making calls, dependency on cellular phones, and the impact on geographic and time differences, may vary depending on the age and generation to which the respondents belong.

Abstract
Genetic engineering could potentially bring about the rebirth of dinosaurs on Earth; with progress in technology, this could become a reality within several decades' time. There are numerous obstacles, ranging from a lack of potential sources of dinosaur DNA and a lack of the processes needed to effectively clone such creatures to a general lack of knowledge about what a dinosaur's genetic makeup is supposed to look like. Even with all of these improbabilities, there is always the potential for the technology to catch up, for the science to improve, and for people to have the will to recreate something that was lost to time.

Hypothesis
Dinosaurs can be genetically engineered using preserved samples of their DNA, which can be obtained from amber deposits containing mosquitoes that lived at around the same time as the dinosaurs and fed on their blood.

Introduction
Dinosaurs lived until about 65 million years ago and were, according to the leading theory, wiped out when a dense meteor several miles across crashed into the Earth. All we have left of them are fragments in the fossil record and the bits and pieces of information we have accumulated through the years. With no living members of the species around, it is considered impossible to bring them back through normal methods; through genetic engineering, however, there might be a way to use the fragments we have collected and the resources at our disposal to recreate a lost species (Imoor 2005). While this may sound good on paper, actually being able to do it is another thing entirely. No discovery to date has unearthed a perfectly preserved dinosaur specimen, whether in the wastes of Antarctica, Siberia, Alaska, or even the farthest regions of Russia. Why the need for a perfectly preserved specimen? DNA, for one thing, is a crucial factor in being able to recreate a lost species. Among the specimens we have right now, namely fossils, bones, and footprints, there has been no effective specimen containing dinosaur DNA that can be used, examined, and possibly recreated in the lab. It is from this point on that we venture from science fact to science fiction.

A lot of people may remember the movie Jurassic Park, in which dinosaurs were created using preserved dinosaur blood found in the bodies of mosquitoes encased in amber, a plant resin that can trap insects, harden, and keep them nearly perfectly preserved for millions of years, both in the movie and in real life (Geolor 2008). There are actually amber deposits scattered throughout the world with insects trapped in them that date back to the time of the dinosaurs. Going along this train of thought, one might conclude after watching Jurassic Park that what was shown in the movie could hold true in real life as well (Science Daily 1998). It is true that if a mosquito of that era fed on a dinosaur's blood and was trapped in amber for millions of years, the blood inside it could possibly still be viable for examination to determine the genetic sequence of dinosaurs. Such an occurrence, though remote, is possible. The technology to utilize the information that could be obtained from the blood, however, has not been invented yet. With all the current advances in technology, there is still no method devised that can effectively cause a single cell, which supposedly possesses all the information needed to produce the entire body of an organism, to divide and create the organism itself (Thinkquest 2010).

Going back to the realm of science fiction: in the movie, frog DNA was supposedly used to fill the gaps found in the extracted DNA. While it is true that millions of years would deteriorate any DNA sample, using frog DNA as a substitute for missing strands seems highly unlikely in reality because of the large difference in species; a better method would probably be to use the direct descendants of dinosaurs, namely today's birds. Using avian DNA, it just might be within the realm of possibility to recreate ancient strands of dinosaur DNA, though this is purely theoretical (Imoor 2005).

All in all, there are factors showing that it is both within and outside the realm of possibility for dinosaurs to walk the Earth once more; the problem is determining which factors outweigh the others, and which seems the more logical path to take in further study of the matter.

Hypothesis Testing: Strength of Association

Improbability of genetically engineering dinosaurs
Before discussing the probability of genetically engineering dinosaurs, there is a need to start with the improbability of accomplishing such a feat. The reasons why genetically engineering dinosaurs is highly improbable at this point in time can be summarized in three factors: lack of proper technology, lack of resources, and lack of knowledge.

Lack of resources refers to the basic materials needed to create a dinosaur. While we may have fossils of dinosaurs prominently displayed in museums, we lack the fundamental material needed to bring about their rebirth, namely DNA that has not deteriorated over time, is intact, and has not been mixed with that of another species. Unfortunately, since dinosaurs lived some 65 million years ago, there are few sources of actual dinosaur DNA that could be used to reconstruct their genetic makeup and clone them (Tyler 1993).

In the movie Jurassic Park, a potential source of dinosaur DNA was found: the preserved remains of mosquitoes trapped in amber that had been feeding on the blood of dinosaurs. It is true that mosquitoes have been around for millions of years and have remained largely unchanged, except for some changes in size; they were the same blood-sucking pests then as they are now (Geolor 2008). It can be assumed that the ancestors of today's mosquitoes fed on the blood of creatures 65 million years ago just as their descendants do today, and since dinosaurs were around at that time, it can be assumed that the mosquitoes of that period fed on the blood of dinosaurs.

Mosquitoes trapped in amber are nearly perfectly preserved, with the contents of their last meal still in their stomachs. Going along this train of thought, if we follow the example shown in the movie Jurassic Park, all that would be needed would be to successfully extract that blood sample from the preserved remains of the mosquito, and there you would have your dinosaur DNA sample.

There are several problems with this particular theory of obtaining dinosaur DNA. First, DNA deteriorates over time just like all other forms of biological matter, so even if there were viable dinosaur blood that could be extracted, it would certainly have deteriorated over the millions of years it sat encased in amber, with gaps appearing in the damaged DNA that would make it difficult if not impossible to reconstruct (Dinobuzz 2010). The second problem would be contamination of the extracted DNA: since it sat in the belly of that mosquito for millions of years, the original blood sample might be contaminated with the mosquito's own DNA, and during extraction some of the mosquito's DNA might inadvertently get mixed in as well. It is also highly improbable that the mosquito trapped in the amber fed on only one dinosaur; if there are multiple blood samples mixed together in its stomach, a problem arises in determining which dinosaur each came from (Tyler 1993). Third and last, how sure are we that the DNA found in the mosquito is of dinosaur origin at all? Sixty-five million years ago there were more than just dinosaurs; there were multitudes of other species, including mammals. As far as we know, the extracted DNA could have come from a dead fish on a lakebed that the mosquito fed on.

A lack of proper technology refers to the current status of technology in our world today. While it may be true that as a species we have come far since the time Jurassic Park was made, we currently still lack the technology to effectively clone a 100% copy of ourselves without using seed material (ThinkQuest 2010). In the case of Dolly the sheep, supposedly the first artificially cloned animal, she was still created using seed material from a live sheep. Another factor to consider is that even if we were able to extract and assemble an intact dinosaur genome, there would still be the problem of having it organize itself into chromosomes.

This is something which, at present, we do not know how to do with dinosaur DNA (Mc Carney 2010). One final factor to consider is that even if a viable dinosaur embryo were created, how is it supposed to hatch? Even with today's modern technology, humans cannot be grown from start to finish in a test tube alone; many aspects of development require a living womb for an organism to develop properly (Thinkquest 2010). In the case of dinosaurs it would have to be an egg: we would have to find an egg with a genetic makeup close to the dinosaur DNA we extracted, implant the necessary chromosomes into it to induce fertilization, make sure it grows in the proper conditions, and in the end expect a normal, breathing dinosaur to be the result (Mc Carney 2010). This is rather idealistic, since any number of factors could go wrong and produce an abomination of nature. What is needed is an artificially controlled environment in which we can monitor and influence the stages of development; however, such an environment has yet to be created even for the purpose of creating humans (Pallegrino 1995).
Another technological deficiency is the lack of effective means of DNA extraction. While it is true that dinosaur DNA might be found within the preserved remains of mosquitoes trapped in amber, with today's technology trying to extract such precious DNA would be like trying to find out how a watch works by using a sledgehammer to open it up (Dinobuzz 2010).

A lack of knowledge refers to our current knowledge of genetic engineering. Genetic engineering is still a fledgling science with much to learn. It has yet to discover how to effectively create a clone from a single cell alone, without going through the process of fertilization and then conception. A single cell supposedly possesses all the information needed to produce an entire body, yet science still has not figured out a way to make effective use of this potential, to make the cell multiply on its own into a living, breathing body. Another factor to consider is that, within the current limits of our understanding of genetics, we still are not able to fully decode the secrets of DNA: how it actually works and how to effectively influence it to produce a desired result. So long as we lack knowledge in the science on which the recreation of dinosaurs depends, their rebirth will always be nothing more than a silver-screen adaptation of science.

With regard to this lack of knowledge affecting our ability to genetically engineer dinosaurs, another possibility would be to artificially recreate dinosaur DNA from their descendants, namely birds. A problem with this theory is that we have no idea what dinosaur DNA is actually meant to look like, what to put in, or what to discard (Pallegrino 1995).

After discussing the improbabilities in the genetic engineering of dinosaurs, the next discussion will be on the theoretical possibility of engineering dinosaurs effectively.

Possibility of Engineering Dinosaurs
At this point in time there is no possible way of actually creating a dinosaur. That is true; however, in the future, who is to say that it would not be possible?
With the way technology is progressing at present, it may only be a matter of time until we are able to successfully recreate complex organisms in the lab using only a single cell. At present, all we have are samples of possible dinosaur DNA in amber-encased mosquitoes and the possibility that technology will catch up sufficiently for scientists to fully utilize it to create dinosaurs (Ridley 2000).
If we are to use the film Jurassic Park as a reference, what would be needed first and foremost is a way to extract DNA from a sample. As soon as this is accomplished, we would have to make sure that the DNA is in viable condition. After millions of years encased in amber it is unlikely to be in pristine condition, so there would be a need to fill in any missing parts of the DNA strand; the use of amphibian DNA would be a bad choice, and the best possible choice would be ostrich DNA, since ostriches are among the closest living relatives of dinosaurs and have a similarly sized egg. Once all the gaps have been filled in, we would need the DNA to form chromosomes and bond with an appropriate host egg in order to create the conditions to bring back a dinosaur. Since we used ostrich DNA, using ostrich eggs would be a good choice as well, since they seem to be about the appropriate size (Ridley 2000). After inducing the egg to accept the chromosomes and placing it in the right conditions to hatch, we would wait and see exactly what happens to the egg. If it hatches, the experiment was a success.

What was just stated is an oversimplification of the process that could be used to create a dinosaur. While it omits several important scientific factors, it does give a general idea of what could be done to recreate dinosaurs through genetic engineering. All things considered, the possibility is still highly unlikely anytime soon without the proper technologies available to recreate the necessary genetic code; however, if there are people willing to make the effort, it might just become a reality. The question, though, is whether it is actually a good thing to bring back dinosaurs in the first place, or whether it would result in a disaster just like what happened in the film Jurassic Park.

Ethical Reasoning
If it were within the realm of scientific capability to recreate dinosaurs, one question must be asked: from an ethical standpoint, is it right to bring back a long-extinct species in whose extinction we had no hand?
Dinosaurs died off some 65 million years ago as a consequence of a meteor that struck the Earth and killed off the vast majority of the species around at that time. While it may be true that dinosaurs were killed off at their peak and their potential was wasted, their deaths ushered in the age of mammals, which helped shape our current world. Bringing back a lost species could have unforeseen consequences. For example, if, hypothetically speaking, scientists were able to genetically engineer a brontosaurus, what would it eat (Soja 2000)? The diet of dinosaurs from millions of years ago is drastically different from what we have at present. Not only that, the viruses and bacteria present today are also different from those of millions of years ago. The species of today have been able to develop a tolerance for them, but a creature bred for an environment that is long gone would have to cope with all of these changes at once, which might actually cause its death (Soja 2000).

There are also moral aspects: why should we as a species bring back another species that we had no hand in destroying? If it were a species that had disappeared due to our actions, and we had the capability of bringing it back, then by all means we should; dinosaurs, however, had their time and then they died off. Bringing back a lost species may seem like a good idea, but the resources that would have to be devoted, not to mention the maintenance that would be needed, do not seem quite worth it. While we are fascinated by dinosaurs as a species that once roamed the Earth, bringing them back just to satisfy our whim of seeing real live dinosaurs does not seem morally correct at all. Consider this: we would be bringing back dinosaurs only to put them into enclosures and watch them for the sake of our own amusement. There is nothing humanity can gain from them in a material sense (e.g., cures or a source of food) except a better understanding of how they behave as a species, and even then, because of the tampering on our part, we could not really be certain that what we created, and how it acted, is the way dinosaurs would really have behaved in the wild if they were still alive. They had their chance at life and then they died off; maybe, just maybe, it would be best to leave things as they are.

My hypothesis, that dinosaurs can be genetically engineered using preserved samples of their DNA obtained from amber deposits containing mosquitoes that lived at around the same time as the dinosaurs and fed on their blood, cannot currently be proven due to the factors I mentioned: we lack the resources in the form of contaminant-free, actual dinosaur DNA; the current state of technology is insufficient to fully recreate a dinosaur's DNA, let alone successfully clone a specimen; and with the current level of knowledge we possess, we lack the understanding of the processes needed to successfully recreate a dinosaur.

After going over the facts, we can conclude that with today's level of technology and with the samples currently available, it is impossible to genetically engineer dinosaurs. What is needed is a greater focus by the scientific community, more viable samples that can be worked with or better methods of cellular extraction, and finally for genetic-engineering technology to catch up, in order to see whether it is possible to bring back a lost species from samples that are millions of years old.

With factors both for and against the creation of dinosaurs, it all comes down to the willingness of humankind to pursue research into the subject. While we may now consider the creation of dinosaurs merely theoretical and possibly absurd, the same could have been said several decades ago, when it was thought impossible to fly to the moon. In all likelihood it is possible that someday, somewhere, someone will be able to bring dinosaurs back; let us just hope that the effort is tempered by morality and not by the same thought process that created the disaster of the final scenes of Jurassic Park.

A Step-by-Step Description of the Activities of the Nervous System While Stepping Up to Reach an Object on a High Shelf

Part One
The nervous system serves as the control and communications center of the body. Suppose a person would like to reach an object on a high shelf. Upon seeing the object above, the sensory cells in the eyes respond, and nerve impulses are carried by the sensory nerves toward the brain. These are received by the forward portion of the frontal lobe, which then sends a command to Area 6, which decides which part of the body to move. Area 6 delivers the command to the primary motor cortex, which initiates the movement by sending the signal, with the help of the neurons in the spinal cord, down to the nerves of the legs and feet. This decision to reach an object on a high shelf is triggered by increasing electrical activity in the frontal region of the primary motor cortex. The neurons send signals to the motor cortex to activate the necessary muscles. With the help of the information provided by the visual cortex, the motor cortex determines the ideal way to reach the object on the high shelf. The motor cortex, in turn, signals the central grey nuclei and the cerebellum to help coordinate the muscles in sequence.

The motor association cortex, on the other hand, handles the more complex aspect of the movement, namely the amount of hand pressure needed to make sure that the object is not shattered or dropped. Finally, the axons of the neurons of the primary motor cortex run down the spinal cord to relay the information to the motor neurons. Motor neurons are directly connected to the muscles and activate contraction. The muscles of the feet, legs, arms, and hands then contract so that the object on the high shelf can be reached. During the contraction of the muscles of the legs and the arms, the myosin of the thick filaments hooks onto the thin filaments and pulls them toward the center of each sarcomere. As the thin filaments slide over the thick ones, the I-bands and the H-zones become narrower, until both disappear at full contraction.

Part Two
The gluteal muscles, chiefly the gluteus maximus, are used for locomotion such as climbing stairs or stepping up to higher ground; the gluteus maximus works as an extensor and rotator of the hip joint. When stepping up, the bones involved are the patella (kneecap), the femur (thighbone), and the fibula (the rear calf bone).

On the other hand, when reaching for an object, the bones involved are the humerus (the upper arm bone) as well as the radius and ulna. The elbow joint, which connects the humerus with the radius and ulna, is also involved.

Website proposal

Buying and selling cars has become a booming business in today's market, yet people are tired of moving up and down the country, from yard to yard, to look for the car of their choice. Hence a car website (the proposed project) that supports buying and selling from the comfort of one's home, office, or any other place would be a great benefit to citizens. First, sellers are expected to post photographs of their cars from different angles (front, back, right and left side, engine, and interior) along with full details of the car, e.g. make, model, transmission, displacement, price, year of registration, interior and exterior color, type of fuel, and mileage, as this will enable buyers to view the car comfortably and make wise decisions before entering the transaction. Buyers will not need to make blind payments, as they will deal with the car owners directly for negotiations, since the owners' full contact details (phone numbers and email addresses) are displayed adjacent to the car; this greatly reduces the risk of fraud. Once cars have been posted, they will not be published (visible to buyers) until payment is made; hence there is an admin section for publishing cars, and only the website administrator (the owner of the business) holds the website pass codes.

It will be necessary to have some knowledge of information technology: buyers and sellers must be able to use computers and the Internet. The use of digital cameras will also be important for uploading photographs to the system. Anyone willing to own a car will be able to access this website, but documents of ownership such as log books, identity cards, or passports must be presented to avoid cases of theft and fraud. The home page will have a number of selected photos (say, ten) of vehicles with a rotating display feature; each image will be shown for 10 seconds in turn, so different cars will be displayed systematically, but there will also be a "sell a car" link where all the cars can be found.

RESEARCH PROPOSAL

Recently the full genome of a Streptomyces species (S. seasidensis) was sequenced by Stenz and colleagues (Stenz et al. 2009). The (linear) chromosome is 6,667,507 bp long with a GC content of 73.1% and is predicted to contain approximately 6,825 protein-coding genes.

The Streptomyces are Gram-positive bacteria with high GC content. They are found in soil and are responsible for the production of most of the commercially available antimicrobial, antifungal, and immunosuppressant substances. Thus, they are very suitable hosts for the expression and secretion of eukaryotic gene products.

The recent discovery of the novel antibiotic Brightonomycin, named after its discoverer (Brighton et al. 2008), has brought new hope for the treatment of MRSA infections, which are causing wide concern in hospital settings worldwide. Brighton and coworkers describe Brightonomycin as a naturally occurring polyketide produced by strains of Streptomyces seasidensis in vitro. This natural antimicrobial belongs to the macrolide family and is similar to erythromycin in that it is a 14-membered lactone ring with ten asymmetric centers and two sugars, bearing an L-mycarose in place of erythromycin's L-cladinose. (Mycarose is a 2,6-dideoxy-3-C-methyl-L-ribohexose, while cladinose is its 3-methyl ether.)

Brightonomycin is produced in the stationary phase of the Streptomyces life cycle. Attempts to identify metabolic pathways for the action and synthesis of Brightonomycin are now being carried out nationally and worldwide, but results have not yet been published. Brighton and colleagues, however, have managed to demonstrate its bactericidal activity against strains of methicillin-resistant Staphylococcus aureus, especially strains EMRSA-15 and EMRSA-16, which are resistant to both erythromycin and ciprofloxacin.

AIM: The aim of this project will be to identify the genes responsible, in Streptomyces seasidensis, for the production of Brightonomycin, a novel antibiotic that is potent against methicillin-resistant Staphylococcus aureus. The purpose of identifying the responsible gene(s) will be to advance knowledge of the location and function of the gene(s) and to characterize and study the relevant protein families and metabolic pathways required for the production of Brightonomycin. The final goal of this project will be the production, isolation, and testing of a suitable strain of S. seasidensis with increased Brightonomycin production for use in the pharmaceutical industry as a novel antimicrobial against MRSA infections.

MATERIALS AND METHODS

Strains of S. seasidensis to be used in this study have been isolated by the Laboratory for Genomic Research (previously published work).

Instruments used in this project will come from our state-of-the-art genomics laboratory. Resources of the laboratory include:
BioAnalyzer (Agilent): lab-on-a-chip platform designed to provide improved accuracy and reproducibility in the analysis of DNA, RNA, proteins, and cells.
NanoDrop spectrophotometer: for highly accurate analyses of 1-microliter samples of nucleic acids.
Sequence Detection System (SDS) TaqMan 7700 and TaqMan 7500 (Applied Biosystems): 96-well plate real-time PCR instruments.
StepOne (Applied Biosystems): 48-well real-time PCR instrument.
Microlab Duo (Hamilton): pipetting robot.
Clondiag ATR 01: Array Tube reader.
Agilent G2539: high-resolution microarray scanner.
ABI 3130xl: automated 16-capillary DNA sequencer.
Illumina-Solexa Genome Analyzer II: high-throughput sequencer.
Hewlett-Packard / Agilent 1100: UV HPLC system.

MOLECULAR GENOMIC AND PROTEOMIC INVESTIGATIONS

To date, three different strains of S. seasidensis with different Brightonomycin production capabilities have been isolated in our department:
Strain A-119 (parent strain) is the strain sequenced in the genomic mapping project and was initially isolated from mountain soil in Chuxiong, Yunnan, China.
Strain B-22 (100-fold production of brightonomycin) and
Strain C-43L (mutant strain, no production of brightonomycin) are both the result of high-throughput gene-trapping efforts by previous research in our department.

Production of the strains: In more detail, we previously used electroporation of a Tn5-derived transposon-transposase enzyme complex into S. seasidensis, which resulted in the creation of random insertion mutant strains. These were screened phenotypically to identify the strain(s) that exhibited altered brightonomycin production (from none to excessive). Strain C-43L exhibited no production of brightonomycin, as demonstrated by the lack of drug detection on silica gel chromatography and HPLC. Strain B-22 produced 100-fold more brightonomycin. The full method for brightonomycin isolation and detection is discussed in detail in the next section.

Isolation of a multi-copy plasmid (PXR-B22) responsible for the overstimulation of brightonomycin production: Using the phage lambda Red recombination system, we created multiple multi-copy plasmids from strain B-22, which has the ability to produce 100-fold more brightonomycin than the parent strain. Further electroporation into parent strains (A-119) led to the identification of a multi-copy plasmid (PXR-B22) containing a DNA fragment from the S. seasidensis genome that stimulates the overproduction of brightonomycin, as demonstrated by HPLC detection.

PROCEDURES FOR GENE IDENTIFICATION

The instruments employed in this project, i.e. the identification of the brightonomycin production genes, are all described in the Materials and Methods section. All experiments will be carried out in media enriched with precursors for antibiotic production (acetate, glucose). In detail, experiments will include the following procedures:

1) Identification and description of the genomic DNA inside the multi-copy plasmid, by extraction and amplification-sequencing using PCR (TaqMan SDS). By excluding all material essential for replication, which exists in large quantities in multi-copy versus single-copy plasmids, we aim to describe base by base the gDNA inside the isolated plasmid PXR-B22. Comparison against large gene databases (e.g. BLAST) and plasmid libraries will help us identify the purpose of the plasmid gene. The full genome map of S. seasidensis will prove useful for locating the above gene in the genetic material of the Streptomyces.

2) Detailed search for the presence of the multi-copy plasmid gDNA in our three S. seasidensis strains. This will be facilitated by PCR sequencing, first creating a primer from the original plasmid's gDNA. Detection of its presence is an essential step in determining and establishing the exact gene location in the B-22 strain. In the parent strain (A-119) it is anticipated to have the same location and function, while in strain C-43L we expect not to locate any of the gene material that is non-essential for plasmid replication.

3) Electroporation of the plasmid into all three available strains and recording of the resulting mutations and phenotypes:

Strain:                                  B-22 strain    Parent strain    C-43L strain
Phenotype (Brightonomycin production):   100-fold       1-fold           None
Plasmid (effect to be recorded):         suppression, induction, further induction, or nothing

4) This identification will be done using a comparative microarray scanning procedure on the three strains, targeted at the regions missing in the mutant C-43L. This will hopefully lead to the identification of candidate genes for the production of the antibiotic brightonomycin. In order to prove such a relationship, we will have to perform multiple confirmation analyses (mutations by homologous recombination of alleles) followed by phenotypic measurements and HPLC assays. By comparing and simultaneously analyzing the strains' microarray results against the genome of the Streptomyces and its predicted protein products (see the genome project protein families), one has the possibility of identifying the cluster of genes for production of the drug itself, the promoter gene location, or the genes for different regulatory proteins (for example global regulatory proteins and small regulatory molecules, e.g. AfsA, a diffusible signaling molecule), all of which are possible contributors to the observed overstimulation of production.

5) Comparison of the best candidate genes with the plasmid genomic DNA to determine its function (i.e. gene promoter/activator, or activator of a regulatory protein). This will help establish whether this particular plasmid is sufficient for antibiotic production at large scale. It is possible that the microarrays and the step-by-step comparison with the genome will identify novel promoter regions, capable of being copied into even easier-to-use low-copy plasmids, for activation of antibiotic synthesis at 1000-fold or greater levels.

Theoretical support exists from the fascinating research currently being conducted in Streptomyces sp. showing that high copy number plasmids can induce the overproduction of polyketides (Fong et al., 2007). Hence we believe that, although time consuming, this project has the potential to achieve its goal, which is the characterization of the genetic mechanisms behind the production of brightonomycin and proposed means for its overproduction. Further studies (protein-specific analysis) will be required to examine the biological and metabolic pathways that complement this effort.

Comparison of Internet scripting languages

Programming languages have undergone many revolutions, especially with the introduction of the Internet. The Internet has grown because of the sophistication that has been added to the tools used to build it. Scripting languages have been very useful in advancing the features of the Internet. It is now possible to transmit secure data over the Internet, thanks to scripting languages which grow in capability every day. The sections that follow give a brief description of three Internet programming languages: PHP, ColdFusion and ASP.

PHP
PHP is an Internet programming language that was first developed in 1994 by Rasmus Lerdorf. The PHP in use today is very different from the initial version released then. The initial development included a posting to the Usenet newsgroup comp.infosystems.www.authoring.cgi, made in June 1995 by Rasmus himself. The posting introduced the Personal Home Page Tools (PHP Tools), the name initially given to PHP. The tools were a tight set of CGI binaries written in the C programming language. The functions of these binaries included the following:
Logging accesses to home pages in private log files.
Viewing real-time log information.
Providing a good interface for the log information.
Displaying daily access counters so that log files could be reviewed regularly.
Blocking user access based on their domain.
Enabling people to password-protect their pages based on users' domains.
Tracking a user's accesses based on that user's e-mail address.
Performing server-side includes without requiring server support for them.
Disabling logging for certain domains.
Easily creating and displaying forms.

The list above indicates the concerns people had at first, which included being able to protect pages with passwords, ease of creating forms to pool information from a data store such as a database, and being able to access data entered in a form on subsequent pages. The list also clearly illustrates that PHP was used as a framework for many useful tools.

The list only covers the tools that shipped with PHP; behind the scenes, the main goal was to develop a framework that made it easy to extend PHP by adding more tools. All of these tools were coded in C. The framework was designed so that a simple parser could pick the tags out of HTML (Hypertext Markup Language), the web markup language, and call the corresponding functions in C. The initial plan was never to create a scripting language, but to this day PHP has become the most important scripting language and it has changed the face of the Internet.

What happened afterwards is that Rasmus started working on a large project at the University of Toronto that was meant to pull large amounts of data from various places and present a web-based administration interface. PHP was the ideal language for this task, but a few other features had to be added to improve performance, and the tools had to be brought together and integrated into the web server.

Rasmus made some hacks to the NCSA web server so that it could invoke the core PHP functionality. The problem with this was that users had to replace their web server software with the hacked version of the NCSA software. The advantage Rasmus had is that, by this time, Apache had started picking up, and it made it much easier to add functionality like PHP to the server.

In April 1996, further features were added to PHP, which included the following:
PHP was developed as a server-side scripting language embedded in HTML. It had built-in features for limiting access to logs, and it supported mSQL queries and Postgres95 back-end databases. This made it one of the fastest and most effective tools for developing database-driven web sites.

It was also developed to work with any UNIX-based web server on any UNIX platform, and it was free of charge to all users, including commercial users; it remains free to this day.
It enabled access logging, where every visit to your pages could be recorded either in a DBM file or in an mSQL database. Having this log information makes analysis easier.
It also enabled access restriction, where passwords were used to protect pages. Restrictions could also be based on the referring URL and many other parameters deemed necessary.

It could also embed mSQL queries in HTML source files. Writing conditional statements and while loops became possible with the improvements added to PHP/FI, which made writing loops much easier. Advanced programming features were added, such as variables, arrays and associative arrays, along with user-defined functions with static variables and support for recursion. Extended regular expressions were also introduced, providing powerful string manipulation through full regexp support.

There was also the addition of HTTP header control, which enables one to send customized headers to the browser; this can be used for advanced features such as cookies. Cookies are small pieces of information stored by the browser on behalf of a website, for example to remember a session or user preferences. Cookies have also been exploited in attacks and tracking schemes, which remains a concern on the web today.

This new version of PHP also added the ability to create GIF images through easy-to-use tags.

It was in this new release of PHP that the term scripting was first used. The first release, PHP 1, had only a simple tag replacement scheme. This simplicity was removed with the release of PHP 2, which employed a parser that could handle a more sophisticated embedded tag language. By today's standards it was not especially sophisticated, but compared to PHP 1 it was.

The main reason for developing PHP 2 was that many people were more interested in embedding logic directly in their web pages for creating conditional HTML, custom tags, and other features. Most people who used PHP 1 actually continued using the C-based framework for creating add-ons. Many users of the first PHP asked for the ability to add a footer that could track hits, or to send different HTML conditionally. This desire led the developers of PHP to come up with the PHP if tag. The introduction of the if tag led, in turn, to the desire for an else tag, and from there it was a short step to writing an entire scripting language.

By June 1997 the growth of PHP had been enormous and it had attracted a lot of users. Despite this popularity, there were still problems with the parsing engine, which was still maintained by one person with only occasional contributions from others. It was not until Zeev Suraski and Andi Gutmans in Tel Aviv volunteered to rewrite the whole parsing engine that PHP version 3 was released. Other contributors also volunteered to work on PHP; it was now an open-source project and no longer a one-man effort.

The new version of PHP included support for all major operating systems, including Windows 95/NT, most versions of UNIX, and the Macintosh. Most web servers were also supported, including Apache, Netscape servers, WebSite Pro, and Microsoft Internet Information Server. It could also support a wide range of databases, such as MySQL, mSQL, Postgres and ODBC data sources.

These new features also included the ability to have persistent database connections and support for the SNMP and IMAP protocols.

From this point, PHP picked up quickly and people started contributing aggressively to its success. Features that enhanced security were developed, and the developers and enthusiasts created an abstraction layer between the language and the web server, which added a security mechanism to the use of the language.

A figure (not reproduced here) charts the growth of PHP over recent years, measured as the number of unique IP addresses serving PHP with Apache.

The growth of PHP domains is captured in a second figure (not reproduced here), which makes clear that PHP has been growing in leaps and bounds in recent years. It reports the number of domains that use the PHP module: of the 36,458,394 domains surveyed in November 2001, 7,095,691 had PHP enabled, which works out to just under 20 per cent (7,095,691 / 36,458,394 is roughly 19.5%).
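As a quick, purely illustrative check of that figure:

# Back-of-the-envelope check of the PHP adoption share quoted above.
total_domains = 36_458_394
php_domains = 7_095_691
print(f"PHP share: {php_domains / total_domains:.1%}")   # about 19.5%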

ColdFusion
ColdFusion is a programming language used for rapid development of commercial applications. It was originally created by Jeremy and JJ Allaire in 1995. The motivation behind its development was to make it easier to connect simple HTML pages to a database. It was not until version 2.0 that ColdFusion became a full platform with an integrated development environment (IDE) and a full scripting language. The language is currently sold by Adobe and has advanced features for enterprise integration and the development of rich internet applications. ColdFusion competes with two other scripting technologies, PHP and ASP (Stopford, 2005).

ColdFusion is popular for the development of database-driven websites. It can also be used to provide remote services such as SOAP web services or Flash remoting, and it is well suited as the server-side technology for client-side Flex. It has also been developed to handle SMS services and instant messaging through its gateway interface; this feature is only available in ColdFusion MX 7 Enterprise Edition (Hall & Brown, 2007).

Features of ColdFusion
One of the most notable features of ColdFusion is its associated scripting language, ColdFusion Markup Language (CFML). CFML compares well with the scripting features of ASP, PHP, and JSP, but its syntax resembles HTML. ColdFusion is most often used with CFML, although there are additional CFML application servers apart from ColdFusion. It also supports languages other than CFML, such as server-side ActionScript and embedded scripts written in a JavaScript-like language known as CFScript.

Other features of ColdFusion include simplified database access and management of the server and client cache. It also generates client-side code, especially for use with forms and validation, and helps in conversion from HTML to PDF and FlashPaper (History of PHP, 2008).

ColdFusion is also used for retrieval of data from common enterprise systems, including Active Directory, LDAP, SMTP, POP and Microsoft Exchange Server, and from common data formats such as RSS and Atom. Other features include file indexing and searching, graphical user interface (GUI) administration, server, application, client, session and request scopes, XML parsing and XPath querying, and server clustering. It also has enhanced functionality for running in a .NET environment and for image manipulation.

ColdFusion's engine was written in the C language and featured a scripting language, plug-in modules written in Java, and a syntax very similar to HTML. Unlike HTML tags, ColdFusion tags start with CF followed by a name indicating what the tag does; for example, cfoutput begins the output of variables or other content.

Apart from these features, ColdFusion has a studio, CFStudio, which provides a WYSIWYG design platform.

ColdFusion development milestones
In January 1998, version 3.1 of ColdFusion was developed. The important thing about this release is that it brought forth a port to the Sun Solaris operating system. The studio module gained a live page preview and HTML syntax checker (Newton, 2006).

ColdFusion version 4.0 was released in November 1998. Version 4.5 was released in November 1999 and added the ability to invoke Java objects, talk directly to a J2EE server and execute system commands. This was a very important milestone, especially given that Java was gaining popularity as an Internet programming language among programmers.

ColdFusion 5 was the first release from Macromedia after the acquisition and the last version built on the legacy, platform-specific code base. ColdFusion version 6, codenamed Neo, was developed purely in Java, which enhanced the portability of ColdFusion and made its security tighter, since it now ran inside the Java Runtime Environment (Wikipedia. ASP.NET tools, 2009). This move was led by Damon Cooper, a senior engineer. In January 2001, Allaire announced a merger with Macromedia. From version 7 the naming convention changed, so the product became Macromedia ColdFusion MX 7. CFMX 7 added Flash-based and XForms form capabilities and report building that could output Adobe PDF as well as FlashPaper, RTF and Excel.

ASP
ASP.NET is a framework for web applications that was developed and marketed by Microsoft so that programmers can build data-driven pages and web services with ease. The technology was released in January 2002 with version 1.0 of the .NET Framework; ASP.NET is the successor to the Microsoft Active Server Pages (ASP) technology. It is built on the Common Language Runtime (CLR), which allows programmers to write ASP.NET code in any language that supports .NET. The ASP.NET SOAP extension component allows ASP.NET to process SOAP messages (.NET comparison chart, 2009).

Milestones of ASP.NET
The release of Microsoft Internet Information Services 4.0 in 1997 led Microsoft to begin looking for ways to improve its scripting technology, ASP. There had been various complaints about ASP, including the difficulty of separating presentation from content and of writing clean code. Mark Anders was tasked with working out what the new model should look like. The initial design was developed over the course of two months, with work continuing over the Christmas holidays of 1997.

The initial model was called XSP because every technology being developed at the time seemed to have a name starting with X, for example XML and XSLT; everything good seemed to start with X. This is an explanation given by Guthrie, who was among the team tasked with developing the technology. The platform was first developed using the Java language. It was later decided that it be built on a new platform on top of the Common Language Runtime (CLR), because the CLR provided an environment supporting object-oriented programming, garbage collection and other features that Microsoft's Component Object Model platform did not.

Since this new technology now targeted the CLR, it was rewritten in C#, although this was initially hidden from the public. It was renamed to align with ASP because, by this point, the platform was seen as the successor of ASP, and the main reason behind the renaming was to provide an easier migration path for developers using ASP (Samaru, 2009).

Comparison of the three technologies
Feature comparison of PHP, ASP and ColdFusion:

Speed
PHP: excellent; it can serve more pages than any other scripting language, and can go even further with the Zend optimizer.
ASP: good.
ColdFusion: excellent.

Function list
PHP: has several functions that make many tasks easier.
ASP: not well developed.
ColdFusion: not well developed.

Database connectivity
PHP: works well with the MySQL database.
ASP: integrates well with Microsoft databases such as MS SQL.
ColdFusion: has an easy interface for database connectivity.

Platform compatibility
PHP: very poor; it was meant for UNIX/Linux platforms and loses some features on other platforms.
ASP: very poor; only works with Windows servers.
ColdFusion: works with Windows servers and Linux.

Error handling capability
PHP: try/catch error handling is not as good as in other languages.
ASP: not well developed.
ColdFusion: has superb error handling capability.

Conclusion
With the advancement of Internet applications, it is expected that the software used to develop for the Internet will become more complicated. The languages most commonly used in Internet development are PHP, ASP, and ColdFusion. These three languages have been competitors with one another in usability and in the features they offer. It will be interesting to see the positions these languages take as Internet usage increases and rich internet applications advance.
The following is a project plan prepared for submission to Boardman Management Group in response to Service Request SR-bi-001, originally requested by Jeane Witten on behalf of Boardman Management Group and Baderman Island Resort.

The project plan has been formulated over a period of several weeks of collecting data through research, interviews, conferences, surveys, and analysis of relevant components of both Boardman Management Group and the Baderman Island Resort computer network, in order to better serve the requirements of the previously stated service request. Requirements from each service request change have also been evaluated and incorporated into the project plan.

This project plan consists of several parts to include a Statement of Purpose, business requirement, presentation of alternatives, recommendations, projected economic model (cost analysis, return on investment, savings), risk assessment, schedule, and a maintenance summary all formulated in order to execute the successful implementation of Microsoft Word 2003 as the standard word processing application of the Baderman Island Resort computer network. Along with this formal project plan, a Microsoft Power Point presentation has been composed as a summarized format to be presented to, and documented for later viewing by Boardman Management Group.

All questions and concerns may be directed to the designated point-of-contact for Boardman Management Group, Jeane Witten.

Statement of Purpose
Currently, 3 different versions of Microsoft Word (Microsoft Word 2000, Microsoft Word 97, Microsoft Word XP) are being used among the 70 local machines which exist within the Baderman Island Resort computer network. These 3 versions of the Microsoft Word application have been determined to be inadequate by Boardman Management Group, and it has been found necessary to implement a more up-to-date and uniform word processing application across all local machines on the Baderman Island Resort computer network.

The purpose of this project plan is to document and outline the structure of implementing Microsoft Word 2003 as the new standard word processing application within the Baderman Island Resort computer network. The desired result should include a uniform system of local machines running the same up-to-date word processing application. This result should also include establishing a system of support and maintenance for the newly implemented application at Baderman Island Resort, once implementation has ceased, along with a successful return on investment for the entity, Boardman Management Group.

Business Requirement
The business requirement for implementing this word processing application has been based upon several factors. The first factor taken into consideration is Service Request SR-bi-001, followed by its Change Requests per the Boardman Management Group point-of-contact, Jeane Witten.

The service request calls for a more up-to-date word processing application. Of the 3 versions of Microsoft Word currently running among the 70 local machines to be upgraded, the latest was released in 2001. Based on Change Request 1 of Service Request SR-bi-001 for a newer version of the Microsoft Word processor, it has been determined that the next release of this processor, Microsoft Word 2003, is the adequate candidate for this project.

Because of the level of maturity of the Microsoft Word 2003 processing application, there exists a substantial amount of support for the system, also contributing to its selection for implementation of the project. Both support and maintenance for this newer application has been found to be critical to the success of the project once implementation has ceased. The selection of Microsoft Word 2003 as the candidate for this project was also based upon cost and functionality.

Another requirement of the project is uniformity among all the local machines within the Baderman Island Resort computer network. Currently, among the 70 local machines, 26 machines are running Microsoft Word XP, 38 machines are running Microsoft Word 2000, and 6 machines are running Microsoft Word 97. To ensure uniformity among all 70 local machines, Microsoft Word 2003 will be installed onto every machine. This will include hardware and software updates, as needed, for those machines found not to meet the system requirements for running Microsoft Word 2003.

Based on data collected through surveys distributed to users on the computer network, it has been determined that Microsoft Word 2003 also satisfies the requirement of ease of use. This will contribute to the successful adoption of, and adaptation to, the newly implemented word processing application by all users on the Baderman Island Resort computer network. It will also help in training users to work with the new application, reducing errors and decreasing dependency on maintenance and support from IT staff.

Alternatives
Currently, there are various open source word processors available in the market that the Baderman Island Resort could choose as an alternative, and the most popular of them is OpenOffice.org Writer. OpenOffice.org is a software suite comparable to Microsoft Office, except that it is free and open source. OpenOffice got its start in August 1999, when Sun Microsystems bought the StarOffice code base from the German company StarDivision; since then Sun has gathered a community of developers who want to take part in a task that benefits all computer users. Its free and open source nature is specified in its GNU Lesser General Public License (LGPL), which permits its use, download and redistribution for any purpose. Aside from Writer, OpenOffice commonly includes Calc for spreadsheets, Impress for presentations, Base for database management, Draw for vector graphics and Math for equations. Being open source has allowed many programmers to develop several versions of it: besides the versions that need to be installed, one can simply be downloaded to a disk for portability, while another can be run in a browser.

The interface of Writer, as well as of the other components of OpenOffice, is almost the same as that of its counterparts in Microsoft Office 2003 and previous versions. First-time users who already use Microsoft Office can therefore easily cope with its functionality, such as styles and formatting. OpenOffice Writer can also read most font formats that Microsoft Word uses. It also has capabilities for mail merge and database connections. On top of that, although Writer defaults to files with the .odt extension, it can still open, edit and save files in .doc format. Compatibility with the new Microsoft Office Word 2007 file format, .docx, is currently under research and development by the OpenOffice makers.

The noticeable problem a first-time user would encounter is that, compared to Microsoft Office Word 2003, it is slow to start up. Depending on the version, its speed also relies on several factors such as internet connection, hard disk and RAM capacity. Generally, with the ongoing development of and support for open source programs, programmers will notice this issue, and those who use OpenOffice Writer over the long run will be notified when a new and better version is available to be downloaded and installed. As of now, the makers of OpenOffice Writer offer a Quick Starter that lessens the time needed to open the program on subsequent launches. Software updates are also released to fix bugs that programmers discover or users report.

Since OpenOffice Writer is free and open source software, arguments arise that there is no official team or person who will maintain it and provide technical support. Developers of OpenOffice Writer and other free software are known to be volunteers who want to break the monopoly of proprietary software in the market. Because OpenOffice Writer is free and can be used for any purpose, distribution teams have arisen to provide the necessary installation and maintenance support, at a cost cheaper than the equivalent support for Microsoft Word. It is then up to the Boardman Management Group to assure itself that its chosen distribution team is competent enough to meet its needs for the OpenOffice Writer product.

On the technical side, such as security, OpenOffice Writer is claimed to be virus free as of now. But because it is open source, developers may, deliberately or not, create code that could be used to hack into or break the file system on which OpenOffice Writer resides. The makers of OpenOffice.org include a security team that keeps track of such vulnerabilities and tries to be as careful as possible in approving changes to legitimate versions, so as to avoid bad impressions among end users. It is essential that one's computer files be protected, especially documents of high importance.

Another issue raised about OpenOffice is its dependency on Java, the programming language that made Sun Microsystems popular. Earlier versions of OpenOffice did not follow the look and feel native to the operating system they were installed on, which forced some users to adapt to a changed environment. Newer versions of OpenOffice try to resolve this issue by building different bundles of code for each operating system available in the market while still maintaining interoperability.

Now that the arguments concerning the use of OpenOffice have been laid out, it is clear that OpenOffice promises many benefits. If some of its noticeable shortcomings can be eliminated, it would be a very good choice for Boardman Management Group to try OpenOffice Writer.

Recommendation
Compared to OpenOffice, Microsoft Office Word 2003 is proprietary software, but it offers direct technical and maintenance support to those who legally purchase it. Microsoft Word 2003 was also shown by survey to be the option most preferred by the intended users. Lastly, in accordance with Service Request SR-bi-001 by Jeane Witten, it is recommended to install Microsoft Word 2003 as the uniform word processor on all of the 70 local machines at Baderman Island. Any hardware and software requirements not yet available on the existing local machines would also be procured. This calls for open bidding to determine which group offering licensed Microsoft Word 2003 provides the best distribution, maintenance and technical support to the Boardman Management Group on behalf of Baderman Island Resort.

At present, three distributors are vying for the upgrade project: Distributor A, Distributor B and Distributor C. They have provided their offered package deals together with their corresponding levels of distribution and maintenance. To support a decision on which distributor Boardman Management should choose, the three distributors' confidential proposed packages are included in the next section. Other distributors may still submit their offers, if interested, until the end of February 2010. Implementation of the project is scheduled for March 15, but it still depends on the availability of financing and other external factors.

Projected Economic Model
Distributor A

Package 1: 1000
Microsoft Office 2003 Suite, which includes Word, Excel, PowerPoint and Outlook, with a three-month warranty and technical support for all programs

Package 2: 1200
Microsoft Office 2003 Suite, which includes Word, Excel, PowerPoint, Outlook, Access and Communicator, with a six-month warranty and technical support for all programs
Prices are inclusive of the cost of distribution, such as CDs, manuals and delivery expenses

Distributor B

Package 1: 500
Microsoft Office 2003 Word, including a three-month warranty and technical support

Package 2: 700
Microsoft Office 2003 Word with a pre-installed compatibility pack for opening .docx file formats

Distributor C

Package 1: 1000
Microsoft Office 2003 Suite, which includes Word, Excel, PowerPoint, Outlook and Communicator, with a three-month warranty and technical support for all programs

Package 2: 1200
Microsoft Office 2003 Suite, which includes Word, Excel, PowerPoint, Outlook and Access, with a six-month warranty and technical support for all programs

If we analyze the package deals the three distributors are offering to the Boardman Management Group, we can easily see that Distributor A gives a promising deal from which the Management Group could really benefit. Its specification, including its prices, is similar to Distributor C's, but the value-added services make the difference in deciding that it is best to select Distributor A. Though Distributor B has the lowest price, it is only offering to upgrade Microsoft Word 2003, which is a bit odd since Microsoft bundles Word with its other Office programs. Normally, if a part of a package needs upgrading and an upgrade exists for the whole package, one should take the whole package to avoid compatibility problems. Distributor B may not be legitimate, as it offers a lower price in exchange for an incomplete service. For impartiality, Boardman Management Group should ask all of the bidders for their respective contracts to deliver and distribute Microsoft products.

Choosing between the two packages that Distributor A offers, the second package has an edge because of the inclusion of Microsoft Access for database work and Microsoft Communicator for instant messaging. Communicator is groupware that would speed up communication between the 70 local machines, but it should be noted that only those machines that need to disseminate information should have it, because it might be abused or used for non-work purposes by employees.

Forest Botany

The article Seasonal Dynamics of Tree Growth, Physiology and Resin Defenses in a Northern Arizona Ponderosa Pine Forest (Gaylord, et al., 2007, p.1) by Monica L. Gaylord and others addresses the seasonal dependence of ponderosa pine. The development, maturity and resin defenses of the pine forest, as linked to various climatic variations, are evaluated and illustrated scientifically in this work on forest ecology. The objective here is to explore the paper critically, drawing out the facts and procedures adopted by Gaylord with the necessary insight into the topic. The supporting articles identified are read in parallel to compare, contrast and strengthen the views and conclusions put forward by Gaylord, and the essence of all of those articles is drawn out.

The article seeks to establish the facts on leaf photosynthetic rate, resin flow and leaf water potential, along with the response of ponderosa pine over two years with respect to climatic variation. The organisms observed in this analytical exploration are ponderosa pine and pine bark beetles. Statistical analysis and a site description were carried out in the course of compiling the article to accomplish the stated objectives.

The authors of Arizona Pine (Pinus Arizonica) Stand Dynamics: Local and Regional Factors in a Fire-Prone Madrean Gallery Forest of Southeast Arizona, USA (Gaylord, et al., 2007, p.1) affirm that broad-scale weather patterns appear to exert control over moisture availability, fire occurrence and forest demography, raising the convincing likelihood of regional synchronization of tree-population dynamics. GL Zausen, the author of the second article, clearly illustrates various aspects of ponderosa pine physiology and bark beetle abundance. The article Ponderosa Pine by William W. Oliver and Russell A. Ryker provides a strong background on the impact of climate on ponderosa pine trees. With the solid backing of these four articles, the exploration carried out by Monica L. Gaylord and colleagues is studied in this paper. The objective of this paper is to investigate the validity of Gaylord's findings against the supporting evidence from the different researchers considered.

Synopsis
The research was conducted to analyze the response of ponderosa pine under extreme conditions. The various physiological aspects of the growth and defense of the trees are identified through the precise and efficient methods adopted in the investigation. The survival of a tree in extreme conditions lies in its ability to overcome them without any major loss of its essentials. Gaylord established certain facts with the support of the experiments conducted, backed up by the findings of other authors in their papers. The research explores the physiology, growth and resin defense of pines and examines their close relationship with climatic dynamics. The supporting documents create a scientifically grounded domain in which Gaylord's views can be validated. The potential of pines to survive various extreme weather conditions is evaluated and studied, while possible queries regarding them are answered. Precipitation, resin flow, belowground and aboveground growth, temporal variation in needles, radial growth and carbon allocation are the chief biological processes determined through various scientific approaches. The analyses done by the different authors come to conclusions closely related to those of the central article of this investigation. The investigation concludes with proposals concerning water stress in foliage, existing hypotheses of carbon allocation in plants, and the temporal absence of pine bark beetle outbreaks in northern Arizona.

Gaylord relates the resin development and wood maintenance of the ponderosa pine forest to variations in weather conditions as well as to day-night differences. Various close relations between the developmental and physiological features of pines and ecological factors have been evaluated against the background of Crater Lake National Park. The paper establishes that constant exposure to different extreme conditions helps pines overcome the problems linked with them. It also finds that resistance to fire is achieved through exposure to fires of reduced intensity. The build-up of woody material and shade tolerant groups is stated explicitly as accumulation of fuels and shade tolerant species (Agee & Perrakis, 2007, p.1), which provides an evident link to forest science.

Andrew M. Barton and his colleagues explain the real-life relationship between diverse climatic conditions and all aspects of pine populations, which also influence the bark beetles. Barton established various facts on fire resistance and growth, and the related hypotheses were supported with proper evidence through scientific methods. He also succeeded in establishing facts on the potential role of regional, synchronizing climate and associated fire patterns (Barton, Swetnam & Baisan, 2001, p.352). The article successfully underlines the fact that the tree populations' reaction to local fire and weather conditions allows them to persist, and that, as part of an ecosystem, they are influenced by climatic diversity throughout the year.

GL Zausen essentially establishes that, apart from ecological alterations and climatic conditions, human activities also contribute to the physiology, growth and expansion of pines. Various negative influences arising during seasonal shifts are found to be reduced by research-based management activities. The paper is relevant in supporting the analysis in the work of Monica L. Gaylord and her colleagues through its exploration of the various linkages to the topic, supported by proper evidence and experiments. Lower rates of destruction of pine trees were observed at the end of their research. Oleoresin exudation flow increased during July of a relatively hotter year compared with a cooler one (Zausen, et al., 2005, p.10).

William W. Oliver and Russell A. Ryker successfully illustrate the traits closely linked with the growth, development and other features of ponderosa pine. The habitats, climatic influences and demography are studied and explained very well in the paper, creating a better foundation for one's knowledge of pine forests and their ecological links. Soil geography and reproductive and life-cycle examinations are also covered, giving the audience information about almost all facts related to pines. The research makes evident that the extensive growth of ponderosa pine on drier sites is closely related to the supply of available soil moisture (Ryker & Oliver, n.d., para. 9).
Weighing the strengths and pitfalls of the paper, it can be stated that Gaylord achieved what was intended before the completion of the research procedures. Research always gains a realistic glow when peer or successor backing is available.

Critique

Introduction
The papers under consideration concern closely related topics. It is an established fact that findings on a certain topic, when investigated by more than one person, are unlikely to be identical. In addition, discussions on topics having variations will obviously not present an analogous result. Gaylord, in fact, has accomplished the objectives through specific approaches that go beyond the other works done on similar lines. The results of studies conducted by peers will, however, be helpful at times. This critique is undertaken to determine whether the works selected for this paper support, directly or indirectly, the facts deduced in the major article. Gaylord succeeded in completing the research on ponderosa pine with a simple but efficient approach to the studies. The experiments on growth and resin flow in ponderosa pine, with efficient backing from the site description and DBH measurements, gave a rugged structure to the entire analysis. Through a systematic approach and the employment of different scientific tools and techniques in the investigation, a sincere attempt has been made to derive the best results with the fewest redundancies. The methods and experiments deployed by the other researchers under consideration are different, but appropriate within their respective domains. The particulars and the data obtained through the statistical approach, or any other analytical method, may therefore be similar to, supportive of, or otherwise back up the papers written by these authors. The critique covers all possible areas of the research and makes specific evaluations of each aspect. The monthly values of the features measured were noted and analyzed to see whether there were changes, as indicated in the title.

Methods
The Northern Arizona University Centennial Forest is the area used by the authors of the major article to investigate the topic. The site description adopted helped in analyzing the different situations that unfolded with varied occurrences of warmth and coolness. The experiments, carried out using tree sampling techniques accompanied by DBH measurements and recurring examination of photosynthesis, ended with the required conclusions and implications. Predawn and midday xylem water potential was a major factor considered essential for obtaining appropriate observations regarding resin flow and the direction of bark development in ponderosa pine. These measured values were later sampled at intervals convenient for the purpose. Resin flow was recorded by removing the bark from both sides of the trunk at a height of 1.3 meters above ground level. Both belowground and aboveground growth analyses were conducted, creating a better approach to the experimental aspects of the study. Fine root production was evaluated to determine belowground growth. Needle development, stem radial expansion and the growth of shoots were evaluated to determine aboveground growth. The statistical analysis was done with analysis of variance (ANOVA) using the SAS software package (SAS Institute Inc. Cary, North Carolina) (Gaylord, et al., 2007, p.3), with observed values tabulated to meet the objectives set for the research. Linear regressions, accompanied by a curvilinear approach, were used to relate resin flow and the developmental and physiological features to atmospheric heat, facilitating the identification of the mean ambient temperature effect. JMP is the software employed for the linear regressions. Durbin-Watson tests were used to check for autocorrelation in the data obtained. The methods are all based on scientific procedures and foundations, giving the most efficient and reliable results, which makes the analysis effective and credible. All observations from the experiments described above were tabulated and recorded for further evaluation on a weekly, monthly, yearly and comparative basis, through which the results were identified.
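To make the statistical workflow more tangible, the sketch below reproduces the same kinds of analyses (a one-way ANOVA across months, a linear regression of resin flow on mean ambient temperature, and a Durbin-Watson check) in Python with statsmodels. This is not the authors' SAS/JMP code; the data file and column names are assumptions for illustration.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.stattools import durbin_watson

data = pd.read_csv("pine_measurements.csv")   # hypothetical tidy data set

# One-way ANOVA: does mean resin flow differ between months?
anova_fit = smf.ols("resin_flow ~ C(month)", data=data).fit()
print(sm.stats.anova_lm(anova_fit, typ=2))

# Linear regression of resin flow against mean ambient temperature.
reg_fit = smf.ols("resin_flow ~ mean_temp", data=data).fit()
print(reg_fit.summary())

# A Durbin-Watson statistic near 2 suggests little serial correlation in the residuals.
print("Durbin-Watson:", durbin_watson(reg_fit.resid))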

Results
Photosynthesis was bimodal throughout 2002 at midday, with June exhibiting the lowest values and October the highest, while 2003 witnessed an overall increase, though the seasonal variation was similar for both years. July marked the lowest value in 2003. An appreciable difference in photosynthesis was identified between months in both years. Predawn and midday xylem water potential of the leaves varied with changing climatic conditions. The highest values for both were seen halfway through the winter season, gradually decreasing toward the end of spring and the advent of summer. In 2002 the predawn value was lowest in May and highest in September, while the midday value was lowest in May and highest in December; the figures were similar in the following year. A consistent variation between months was observed in both years. Resin flow was measured weekly and ranged from 11.9 ± 3.1 to 0.03 ± 0.03 mL in 2002, with the highest value in June and the lowest in January. In 2003 the range expanded to 17.7 ± 3.0 to 0.01 ± 0.01 mL, with July the highest and January the lowest. In fact, there was no considerable variation between 2002 and 2003. Belowground growth was lowest in the period from January to June, with a small increase in July and maximum average growth in August 2003, a better figure than in 2002 for root growth, also in terms of consistency. Root growth rates differed among the months of the two years. Aboveground growth occurred between June and July, with the maximum level of development during the July-August period in 2002; in 2003 the May-June span marked the beginning and June-July the highest values. The similarity between the two years is the elongation that occurred during December. Linear regressions helped in assessing carbon allocation between development and resistance. Resin flow was positively correlated with the growing period under consistent air temperature. The photosynthesis results were of least significance under the correlative analysis, with the most relevant relationship being a negative one with resin flow.

Discussion
The overall investigation shows that considerable variation exists in the growth, physiology and resin flow of ponderosa pine. The variation in climate between two consecutive years cannot be tremendous, yet the study indicates a clear difference in the developmental aspects of the pines between 2002 and 2003. Belowground growth credibly displayed an enhanced appearance with the availability of soil water, soil temperature and carbon dioxide. Resin flow is directly linked to the temperature prevailing during the period, but the variation between the years is not evident, because the temperature differences are not considerable when compared with the average temperature of the two analyzed years. Water stress contributed evidently to needle growth, and the parallel influence of hydraulic pressure on both aboveground and belowground growth is explicit in the experiments. The bivariate graphs produced for the relationship between growth and resin defense were redundant, and the expected result was not obtained through this approach. The GDBH (growth-differentiation balance hypothesis), which was supposed to be supported by this particular plot, did not prove to be fully supported. The requirement was a bell-shaped relationship between growth and resin defense, but a clear bell pattern was not obtained, which is a flaw in the accuracy of the research conducted. According to the GDBH, under a medium level of water stress only resin defense can be enhanced, not growth, owing to the allocation of carbon to resins rather than to growth. To the maximum extent, however, the results approached the GDBH in the case of higher water stress. As a whole, the research has been successful in achieving its objectives, and various other completed studies in the same domain have supported the facts derived from this study.

Conclusion
The major article has been critically evaluated and examined to ascertain the facts and to determine which investigation proved most efficient, supported by the scientific methods and tools available. Through the experimental approach the requisites are met, to which the other selected articles have been related, and it shows that all four articles considered in this study support the facts in the main paper. GL Zausen, in a different approach to the management of growth and physiology, actually evaluates the matter of their seasonal links. Climate and forests are essential constituents of the wealth named global flora, which has an inevitable relationship identified through the investigation performed by Gaylord, with support, of course, from other researchers in the fields of botany and forestry.

The study of climatic conditions and fire events is the objective of Andrew M. Barton, who never ignores the link between climate and the growth and resin flow of pine trees. The same is true of the fire science article, which does not exclude that relationship from its investigation. William W. Oliver, whose aim is to study all aspects of pines, likewise provides much supporting material in his work with respect to the major article selected for this research.

The overall analytical approach adopted illuminates different facts related to the tree species, especially with regard to growth, development and resin flow. Arid areas will be affected by drought and other similar extreme conditions that influence moisture availability, eventually leading to a negative effect on the growth and physiology of pine trees and, by extension, on bark beetle survival. The realistic development of resistance against forest fire is explored in detail, which can be used as evidence for trees anywhere in the world. The research finds evidence in all of the academic sources that slight variations in photosynthesis have no impact on growth and resin flow during the growing season of ponderosa pine. As a whole, the work integrates the selected articles with the major article chosen for investigation on ponderosa pine, with the evidence of scientific test results and graphs.

Different ecological systems can be closely related to studies on ponderosa pine, given the strong background of experimental results obtained. Best suited are tundra forests, which have many characteristics similar to those of pine forests. The climatic and environmental influences on ponderosa pine are applicable to tundra, giving better support for research on tundra. Research performed in the context of pines has revealed shifts in growth, expansion and resin flow, providing an outlook on the possibility of the same for tundra. Investigations of tundra in Canadian regions can be made easier by first recognizing the traits tundra shares with ponderosa pine and then utilizing the established results and conclusions for the latter group. The same strategy can be employed for other ecological species as well, by suitably discovering similar traits and characteristics with respect to pines. The underlying fact that forests essentially exhibit resemblances, whatever their names and diversity, remains valid across research in the various fields of forest botany.

The paper succeeded in sorting out different supporting conclusions, which definitely highlight Gaylord's sincere work on ponderosa pine and its botanical aspects. The utility of her research conclusions in forest botany contributes much to the world of ecology. Science and technology come together in this work to attain a statistically supported and systematically finished piece of research on a significant topic of universal interest.
The numerical tools and techniques of all the researchers, taken together, have helped establish the factuality of Seasonal Dynamics of Tree Growth, Physiology and Resin Defenses in a Northern Arizona Ponderosa Pine Forest by Monica L. Gaylord and her colleagues. In fact, it can be deduced that the growth period and the resin defense phase, along with the physiological traits of the ponderosa pine of northern Arizona, are closely tied to climatic variations over any short time span, thereby exhibiting real-life distinctions in them.
Gaylord, with the support of other researchers, perhaps working along different paths, was able to establish the principal scientific and botanical concepts on growth, development, expansion and resin flow in ponderosa pine, at the same time contributing valuable procedures, concepts and information to other fields of research in forest botany and ecology as a whole.