Significance of Myosin Regulatory Light Chain 3 in Cancer

Myosin regulatory light chain 3 (MRLC3) is a regulatory light chain subunit of myosin that is thought to play a specific role in myosin regulation. The gene comprises four exons, and its product has a calcium-binding domain similar to that of recoverin, a retina-specific protein involved in visual excitation and light adaptation (Dizhoor, 2002). A study of the genomic structure and organization of candidate genes for myopia, a common eye disorder, showed that MRLC3 is expressed in the ocular tissues of myopia patients.

However, mutation screening showed that polymorphisms of the gene were not associated with the disease (Scavello et al., 2005). Another study found that MRLC3 may play a role in the regulation of muscle filament assembly and reorganization in muscle cells, showing that diphosphorylated MRLC3 is necessary for the organization of stress fibers and of contractile rings during cell division (Iwasaki et al., 2001).

The role of MRLC3 in cancer is not clear, although studies suggest that its function in cell migration may be involved in cell invasion in metastatic cancer (Nguyen, Hussaini & Gonias, 1998; Tohtong et al., 2003). Cancer cell lines treated with inhibitors of myosin light chain kinase (MLCK), which regulates the phosphorylation of MRLC3, showed markedly reduced invasiveness in in vitro invasion assays, a reduction mainly due to impaired cellular motility (Tohtong et al., 2003). This result is consistent with the stimulation of cell migration observed in breast cancer cells after treatment with urokinase-type plasminogen activator, which stimulates phosphorylation of MRLC3 (Nguyen, Hussaini & Gonias, 1998). While the role of MRLC3 in cell migration is apparently significant in cancer metastasis and invasiveness, other cellular functions of MRLC3 may be involved in the pathophysiological processes of cancer.

Significance of Cyclophilin A in Cancer
Cyclophilin A is a member of the cyclophilin family, a group of specific binding proteins primarily localized in the cytoplasm. Cyclophilins are known as the target binding proteins for the immunosuppressive agent cyclosporin A (Handschumacher et al., 1984). Cyclophilin A also has enzymatic peptidyl-prolyl cis-trans isomerase (PPIase) activity necessary for protein folding in vivo. This enzymatic activity of cyclophilins is also involved in protein transport, mitochondrial function and pre-mRNA processing (Andreeva, Heads & Green, 2001; Halestrap, Connern, Griffiths & Kerr, 1997; Bourquin et al., 1997).

Cyclophilin A has been reported to be overexpressed in many cancers, including human pancreatic cancer, oral squamous cancer and non-small cell lung cancer. Its role in cancer cells remains unclear, although one study showed that overexpression of cyclophilin A renders cancer cells resistant to hypoxia- and cisplatin-induced apoptosis (Choi et al., 2007). That report showed that induction of cyclophilin A was associated with cell hypoxia, while its overexpression prevented apoptosis induced by hypoxia and cisplatin. The results suggest that cyclophilin A plays an important role in tumor growth by suppressing the production of reactive oxygen species and the depolarization of mitochondrial membrane potential. Another study showed that overexpression of cyclophilin A directly stimulates pancreatic cancer cell proliferation through its receptor CD147 (Li et al., 2005).

The role of cyclophilin A in glioma cells is not known, although overexpression of another member of the cyclophilin family, cyclophilin D, has been shown to suppress apoptosis in glioma cell lines (Machida, Ohta & Osada, 2006). Another study found that cyclophilins were among the up-regulated proteins in human glioma cell lines treated with an experimental antiglioma agent, suggesting that cyclophilins may also play a role in the resistance of glioma cells to drug-induced apoptosis (Bian et al., 2008).

Telecom Industry

Innovation has been the main driving force behind the telecom industry. As part of the service sector, it is extremely customer-driven and heavily influenced by advances in technology. The sector involves a high level of competition among companies operating in the same markets, so innovation ends up being the key competitive edge that every company strives for.

AT&T Inc.
AT&T Inc. is one of the largest telecommunications companies in the world; it operates the fastest-growing 3G network in the United States and serves more than 85 million customers. The company was formed from the 2005 merger of communications giants SBC Communications and AT&T Corporation. AT&T enjoys a diversified portfolio, offering services ranging from wireless phone service and broadband internet to internet-based television distribution.

AT&T provides a global backbone network that carries up to 18.7 petabytes of data traffic on an average day. It has also introduced and facilitated the introduction of revolutionary telecommunications products such as the iPhone 3GS, launched in June 2009. AT&T holds the position of offering the only purely IP-based television service. Besides its technology products, AT&T is also the biggest telephone directory publisher, providing the product to more than 170 million customers (AT&T, 2010).

Innovation and Success
AT&T has made innovation part of its manifesto, and it is one of the driving forces of the company's success. AT&T has always used innovation as one of its competitive edges; it strives to stay ahead of the competition by drawing on technological advances to offer new products to customers.

Strategy
To inculcate innovation into AT&T's operations, the company channels new ideas and facilitates their flow from within and outside the company. One such example is the Fast-Pitch Platinum Awards, held annually to promote new ideas and bring them into the company's offerings to customers.

Another example of AT&T's emphasis on innovation is the AT&T Innovation Center, an institution created by the company to continually bring new technologies, applications and products to customers. This shows that AT&T not only maintains relationships with outside developers but also invests a significant amount of money in in-house development.

The company's overall strategy is to bring new products to market and to promote constant innovation within the organization.

Structure
AT&T is an open organization in which ideas flow easily throughout the hierarchy. Innovation is encouraged at every step and is highly appreciated. Innovation centers and competitions are regularly held within and outside the organization to ensure that the company always has something new to offer its customers.

Vodafone
Vodafone is the biggest cellular telecommunications company in the world by revenue; the company is valued at GBP 71.2 billion. It has operations in 40 countries across five continents, through its own or partner operations, and serves almost 323 million customers.

Being one of the leading telecommunications companies in the world, Vodafone treats innovation as a key area of emphasis. Vodafone has divided its operations into two major components: one serves consumers, offering products for the general public, and the other provides communication solutions for businesses (Vodafone, 2010).

Innovation and Success
Vodafone's business model is directed towards providing solutions for customers at all levels. For Vodafone, innovation means reducing cost, reducing complexity and increasing operational efficiency.
Innovation has been the company's major success factor over the years; the way it has diversified its portfolio and expanded into other countries is just one example. It promotes innovation at every step, and for this reason holds a number of events throughout the year to ensure a steady flow of ideas into the company.

Strategy
Vodafone's strategy is to transform business and life across the globe, providing services that revolutionize the way things are usually done and make processes less complicated and as simple as they can possibly be.

To improve solutions for businesses worldwide, Vodafone specializes in machine-to-machine solutions. The focus of the technology is to facilitate inter-machine communication and make businesses work more efficiently. It enables businesses to go global and to communicate better among employees. The solutions cover a wide range of industries, including fleet management and point-of-sale solutions.

Besides general industry solutions, Vodafone also provides specific services for the pharmaceutical and financial services industries. Products include the transfer of patient data and better opportunities for doctors to consult other doctors on a particular case. Vodafone also offers services such as proactive SMS alerts that tell customers the status of their mortgage applications and the like.

To bring about and ensure innovation in its operations, Vodafone takes a number of measures. The R&D department constantly strives to develop new applications and products for customers to take advantage of. The department is referred to as the R&D backyard, implying that it is an in-house facility that brings innovations to the organization.

To explore opportunities outside the organization, Vodafone runs a program called Betavine, which gives outside developers an opportunity to learn from experts, discuss their ideas on the forum to polish them further, and publish their applications. The project is directed towards bringing beneficial change to the developing world, but it also shows the importance innovation receives within the organization.

Structure
Being a global company, Vodafone enjoys a flexible structure; the organization is designed so that communication is facilitated at all levels and innovation is encouraged. Ideas at each level are valued, and those that offer more value are nurtured.

China Mobile
China Mobile Limited is the world's biggest telecommunications services provider in terms of the number of subscribers, serving more than 500 million customers each day. It is one of the largest Chinese companies in the world and a matter of pride for the Chinese economy. China Mobile is the first company based in mainland China to be listed on the Dow Jones index of the New York Stock Exchange.

The company went multinational after acquiring Paktel, a Pakistani telecommunications company, in 2007. China Mobile also acquired China Tietong in 2008, expanding its operations into landline services and internet service provision (China Mobile, 2010).

Innovation and Success
Innovation is again the key characteristic in the company's success, although with China Mobile innovation operates more at the infrastructure level, as it has been the main player in the development of cellular networks in China. While its product diversity may not show the same level of innovation as the other cellular companies discussed in this paper, what is commendable is how the company has expanded over the years while keeping up with technological advances in the world and contributing to them through a number of programs.

Strategy
In developing countries, the highlight of cellular expansion has been the innovation shown by companies: since these markets are very price-driven, the innovation lay in how customers were to be attracted.
 
China Mobile has been innovative in this regard as well; major cost-cutting opportunities needed to be explored, and the company quite successfully managed to do so. One such example is tower sharing between companies, which cuts the cost of hardware facilities.

The company also values synergy in innovation; the major example is the Joint Innovation Lab, a joint venture of China Mobile, Vodafone, Verizon Wireless and Softbank Mobile. The lab focuses on evolving technologies, especially mobile technologies; the idea is to develop a platform through which consumers from the shared pool of customers get access to applications built with the combined intelligence of the world's major telecommunications service providers.

Structure
China Mobile, being culturally driven, has a stronger hierarchical structure than the other companies discussed earlier. Despite this hierarchy, the organization values innovation, as is evident from its current success and status.

Personal Area Networks

IR stands for infrared radiation, electromagnetic radiation with a wavelength longer (and frequency lower) than that of visible light, but shorter (and frequency higher) than that of terahertz and microwave radiation. IR devices are used for both short- and medium-range communications and control. There must be a passable straight line through space between the transmitting source and the receiving destination; hence the mode is line-of-sight. Radio frequency (RF) technology is one of the most promising emerging technologies. One instance is its use in radio-frequency identification (RFID), the exploitation of an object (called an RFID tag) applied to or incorporated into a product, animal or person for the purpose of recognition and tracking via radio waves. Some tags can be read from several meters away, well beyond the line of sight of the reader. Unlike RF, IR cannot pass through obstacles such as walls; hence it is used for intrusion detectors, entertainment control units (remote controls), robot control systems, cordless microphones, modems, and printers, among other devices.

To compare IR and RF, it is important to review the standard industry uses of both technologies. In a nutshell, IR is suitable for single-room applications, such as a remote control, whereas RF is most suitable for applications such as in-hospital phone communication, where a greater range is required. In terms of cost, IR technology is cheaper than RF. Wireless technology is growing rapidly and plays an increasing role in the lives of almost everybody in the world, directly or indirectly. However, wireless technology is overused in some situations, becoming a social bother in some instances. The next generation of wireless network architecture will be more flexible, open, and standards-based, facilitating a smooth migration from today's hierarchical circuit-switched technology to peer-to-peer, packet-switched networks.

Bullet Trajectory

The flight of a bullet through the air is determined by various factors. Trajectory is the path a bullet follows once released from the barrel. The bullet becomes a projectile, and gravity and air resistance eventually bring it to a fall. Several parameters affect the motion of a bullet through the air, including drag, gravity, air resistance, humidity, wind and temperature; each either speeds the bullet up or slows it down. These are considered external factors and are covered by exterior ballistics, which treats the flight of the bullet from the moment it leaves the muzzle, when it no longer has propulsion (Hawks, n.d.).

The measure of the trajectory is crucial, since it denotes the distance covered by a bullet and the path followed; this is what allows the shooter to align the weapon to hit the intended target. The trajectory is given in reference to the line of sight and can be determined by tracing the line of sight relative to the bullet's position along its flight path. A positive trajectory is obtained when the line of sight is below the position of the bullet; a negative trajectory is obtained when the bullet is positioned beneath the line of sight. The trajectory is crucial because it enables the shooter to determine the target: a target shooter or hunter uses it to predict the striking point of the bullet. The bullet can strike a point other than the aim point; the difference between the aim point and the impact point is the error, which may be larger than expected. The error depends on the distance the bullet covers, so the error for short shots is not the same as the error for long shots.

Air resistance is a phenomenon that has to be considered in the flight of a bullet, since the resistance posed by the air reduces the bullet's speed. Its effect is captured by the ballistic coefficient: the higher a bullet's ballistic coefficient, the less its forward motion is affected by air resistance.
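
As a rough illustration, the toy point-mass model below shows how a higher ballistic coefficient preserves velocity. It is only a sketch: the v-squared drag law and the constant K are assumptions chosen for demonstration, not values from real drag tables.

    # Toy point-mass drag model: a higher ballistic coefficient (BC)
    # means less velocity lost to air resistance over the same flight time.
    # K and the v^2 drag law are illustrative assumptions, not drag-table data.
    def remaining_velocity(v0, bc, flight_time, dt=0.001):
        K = 5e-5  # assumed drag constant
        v = v0
        t = 0.0
        while t < flight_time:
            v -= (K * v * v / bc) * dt  # deceleration grows with v^2, shrinks with BC
            t += dt
        return v

    print(remaining_velocity(800.0, 1.0, 1.0))  # standard projectile, BC = 1.0
    print(remaining_velocity(800.0, 0.4, 1.0))  # lighter bullet, BC < 1, slows faster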

Personnel involved in military ordnance did much to reveal the projectile behavior of bullets. This was achieved using a standard bullet whose weight and shape were fixed, and whose flight was observed. The standard bullet weighed one pound and had a diameter of 25.4 mm (one inch); its shape was semi-pointed, which offered less resistance as it moved through the air (John, 2004).

The ballistic coefficient of this standard projectile was defined as 1.0, and the standard projectile is used as the reference for determining the ballistic coefficient of other bullets, relative to its velocity retention. A bullet's application determines its ballistic coefficient; for instance, the coefficient is less than one for most target and hunting bullets, because they are lighter and shorter than the standard bullet and so less able to punch a hole through the air. Not all target bullets have a ballistic coefficient below one, however (Nelson, n.d.); some have coefficients greater than one. These bullets are heavy and thus have more penetrating power through the air.

Figure: Projectile motion
A bullet takes some time to travel from the muzzle to the target point, and the time spent rising equals the time spent falling: half the flight is spent rising and the rest falling. The sight-in distance determines where the bullet strikes. A bullet will strike below the aim point when the target lies beyond the sight-in distance; conversely, it strikes above the aim point when the target is nearer than the sight-in distance (Hawks, n.d.). Target shooters use this phenomenon to position the shot so the bullet arrives at the aim point.
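
The sight-in geometry can be sketched with a simple vacuum model (air resistance is deliberately ignored, so the numbers are purely illustrative):

    # Height of the bullet above (+) or below (-) the line of sight at range x,
    # for a rifle zeroed at zero_range. Vacuum model: drag is ignored,
    # so this only illustrates the geometry described above.
    def height_vs_sightline(x, v0, zero_range, g=9.81):
        drop = 0.5 * g * (x / v0) ** 2                   # gravity drop at range x
        drop_at_zero = 0.5 * g * (zero_range / v0) ** 2  # drop the barrel tilt cancels
        launch_rise = drop_at_zero * (x / zero_range)    # rise from the tilted barrel
        return launch_rise - drop

    print(height_vs_sightline(50.0, 800.0, 100.0))   # before the zero: above aim point
    print(height_vs_sightline(150.0, 800.0, 100.0))  # past the zero: below aim point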

The resistance caused by air is not constant, because variables such as temperature, altitude, humidity and pressure determine air density, and air density determines air resistance. Air resistance decreases with altitude, because air at greater heights is less dense than air at lower heights. The same applies to increases in temperature: warm air expands, its molecules are more sparsely distributed, and it offers less resistance to bullets. Increased humidity also reduces air resistance, because water vapor molecules are lighter than the nitrogen and oxygen they displace, so humid air is slightly less dense; the trajectory in this case flattens. Pressure is force acting per unit area, and since higher pressure means denser, more compressed air, an increase in pressure increases the air resistance and the frictional forces the bullet encounters (John, 2004).
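
For dry air, the density behind this resistance follows directly from the ideal gas law, rho = P / (R * T). The short sketch below uses that standard relation and leaves humidity out for simplicity:

    # Dry-air density from the ideal gas law: rho = P / (R * T).
    # Warmer air or lower pressure -> lower density -> less air resistance.
    def air_density(pressure_pa, temp_k):
        R = 287.05  # specific gas constant of dry air, J/(kg*K)
        return pressure_pa / (R * temp_k)

    print(air_density(101_325, 288.15))  # sea level, 15 C: about 1.225 kg/m^3
    print(air_density(79_500, 278.15))   # roughly 2000 m altitude: thinner air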

The trajectory is also influenced by gravity, the force that pulls objects towards the center of the earth; gravitational force is calculated from the masses involved (for two bodies, it depends on the product of their masses). As the bullet leaves the muzzle it comes under gravitational pull, which is why it eventually falls. Gravity acts on bullets even when they are not in flight, but the physical support offered by the barrel disappears once the bullet is released, and the bullet is then forced to fall. The flight is also slowed by air resistance as the bullet moves through the air (Hawks, n.d.). To compensate for the immediate drop, the barrel is given a slight upward alignment, so the bullet rises somewhat after leaving the barrel; the axis of the barrel is the limiting factor for the bullet.

The velocity at which the bullet leaves the barrel determines the distance covered before it hits the ground; the kinetic energy the bullet acquires grows with the square of that velocity. The projectiles produced by low-velocity bullets are not the same as those produced by high-velocity bullets: high-velocity bullets follow a lower arc to the target, whereas low-velocity bullets must follow a higher arc to reach it. The magnitude of the arc depends on the velocity of the bullet. Bullets at high velocity cover greater distances before air resistance and gravity bring them down. The line of sight is used as the reference from which the barrel is given a certain tilt so the bullet reaches the point of aim (Nelson, n.d.). A trajectory is also produced when the barrel is tilted directly upwards or downwards.

In conclusion, the trajectory of a bullet is the same as that of any thrown object, be it a ball or a stone. Its flight through the air is affected by several factors, chiefly air resistance and gravity. Gravity is the force that pulls objects towards the earth's center. Air resistance is reduced by increased humidity and temperature, while an increase in pressure increases the air resistance acting on the bullet. The energy of impact is determined by the velocity at which the bullet leaves the barrel, and gravity and air resistance ultimately force the bullet to fall.
Does the end justify the means? This is the question that reigns in the field of medical biotechnology, especially at the mention of embryonic stem cell research. With human beings facing diseases that are increasingly hard to treat, scientists have unearthed the potential of stem cells for treating some of these conditions. There has, however, been continued controversy over whether this is a justifiable means of reaching that end. Ethical, moral and legal questions have been raised by opponents of stem cell research. On the other hand, proponents of stem cell research are guided by the potential the technology holds for relieving human suffering and countering the limitations of adult stem cells (Oxford University Press, p 5). Should stem cell research therefore be allowed, or should it be discouraged by all means?

This paper takes the position that embryonic stem cell research should be allowed because it can lead to cures for a number of diseases.

Positive aspects of embryonic stem cell research
Embryonic stem cell research utilizes embryonic stem cells, which are more reliable than fetal or adult cells. Among the difficulties met by stem cell researchers using adult stem cells is that of isolating and propagating them. Embryonic stem cells save this situation: they are easy to isolate from the inner cell mass of the blastocyst, which is currently the primary source for stem cell research (Lindsay, p 1).

The ability of embryonic stem cells to proliferate in large numbers for a year or more in the laboratory before differentiating into new cells makes them more reliable and readily available, facilitating better research into human diseases. It is important to remember that embryonic stem cells are unspecialized, which makes them fit for research, as it is easier to conduct research with them without inflicting harm on human life (Lindsay, p 7).

Those who raise moral questions regarding the mistreatment of embryos need not worry, since embryonic stem cells are donated and never taken out of the uterus (Guenin, para 2). Instead of discarding the extra embryos that result from in vitro fertilization, couples are given the choice to donate them to stem cell research, which by no means contradicts morals.

Embryonic stem cell research should be allowed since embryos are not considered human beings, which nullifies the moral and ethical questions raised. The cells are derived from blastocysts, groups of cells that have not yet become actual organs and therefore do not qualify to be termed human beings. After all, human life begins when the heartbeat develops or when brain activity begins (Bookstrike, para 8). With this in mind, it would be unwise to simply waste the approximately 18 percent of zygotes that do not implant after conception (Bookstrike, para 6); these embryos are better off donated to stem cell research.

Allowing stem cell research lets us benefit from cells that create flexibility for current and potential treatments. The promise of stem cell research is high, with bone marrow and umbilical cord blood stem cells already used to treat cancers such as leukemia and lymphoma. Injecting human neural stem cells into cancerous areas has been shown to deliver cytosine deaminase, an enzyme that converts a prodrug into a chemotherapeutic agent, helping to reduce tumor mass by 81 percent (Ellis, para 8). Beyond cancer treatment, scientists have since 2003 been able to use embryonic stem cells to create a thin sheet of totipotent stem cells to cover a damaged retina and restore vision (Ellis, para 20).
The flexibility of stem cell research in current and future treatment has been demonstrated by researchers from the University of California, who injected human embryonic stem cells into paralyzed mice; the mice regained the ability to move and walk four months later (Ellis, para 12). In June 2005, Dr. Sheraz Daya and his researchers used stem cells to restore sight in 40 patients at the Queen Victoria Hospital in Sussex, England (Ellis, para 17).

Negative aspect
Opponents of embryonic stem cell research have repeatedly argued that embryos are human beings and that destroying an embryo therefore amounts to killing a human life. Settling this issue, however, requires establishing the point at which human life begins during embryonic development. As such, the argument is not valid, since embryos are not considered human beings until they are two weeks old or show the first signs of nervous system activity (Fox News, para 10), and embryonic stem cell research is carried out using blastocysts that are only a few days old.

Conclusion
There is no doubt that allowing embryonic stem cell research holds promise for curing a number of diseases. The reliability of embryonic stem cells, as opposed to fetal or adult cells, is a great benefit to reap. Since blastocysts are not yet human beings and are donated from in vitro fertilization, moral and ethical questions are resolved. We stand to gain treatments for conditions that currently pose a threat to human beings. In view of this, embryonic stem cell research is a justifiable means towards the end of relieving human suffering.

HIV TO AIDS

One of the most common definitions of disease is a lack of ease in any particular area of the body. Some diseases can be cured and some cannot. This report elaborates on one such disease which, once contracted, cannot be cured: AIDS. AIDS is an acronym whose full form is Acquired Immune Deficiency Syndrome.

Acquired means that one can be infected with it.
Immune deficiency means a weakness of the body's system that fights against disease.
Syndrome means a group of health problems that make up the disease.

Human Immunodeficiency Virus (HIV) causes AIDS. If someone is infected by HIV, the body fights the infection by making special molecules called antibodies, proteins produced by the body's immune system to fight HIV. When a blood test for HIV is conducted, these antibodies are searched for in the blood; if antibodies are present, HIV has infected that person, and a person with such antibodies in the blood is called HIV positive. The viral load test, however, is a more advanced test that measures the virus itself rather than searching the blood for antibodies produced by the immune system.

Being HIV positive, that is, carrying HIV in the blood, is different from having AIDS. Numerous HIV-positive people do not get sick for many years. Damage to the immune system increases as HIV disease progresses. HIV itself does nothing to a person's body except damage the immune system, which is a special kind of security guard that protects the human body from disease. The human body carries germs such as protozoa, bacteria and fungi. With a normal immune system these germs usually cause no harm, but when the immune system is damaged they become uncontrollable and can make the person sick; infections of this kind are called opportunistic infections.

Usually a person does not know when he gets infected with HIV. Within a few days or weeks of infection, the amount of HIV in the body becomes very high, and some people consequently suffer from headache, fever, stomach pain, skin rash or swollen lymph glands for two or three weeks. Most people think it is nothing more than flu, but it is the indication of the first stage of HIV disease, named primary HIV infection or acute HIV infection. More than half of the people infected by HIV do not recognize these as possible symptoms of HIV. Anyone who has such symptoms and any chance of having been exposed to HIV should consult a doctor to learn the severity of the disease.

If someone is very recently infected with HIV, his normal HIV blood test will be negative, because that test looks for antibodies, and the immune system usually takes about two months to produce them. The more advanced viral load test, which measures the virus itself, can help detect HIV disease at this early stage. At the very early stage, HIV multiplies very rapidly because the immune system has not yet produced antibodies to fight it; therefore, during acute infection the viral load test will show a very high viral load. A negative HIV antibody test combined with a very high viral load indicates that a person has probably been infected in the last two months. If a person has been infected for more than two months, both tests will be positive, but the viral load will not be as high as it would have been in the first two months.
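
The way the two tests combine can be sketched as follows; the cutoff used for a "very high" viral load is an assumed value for illustration, not a clinical threshold.

    # Combining the antibody test and the viral load test, per the description above.
    # HIGH_VL is an illustrative cutoff, not a clinical value.
    def interpret_tests(antibody_positive, viral_load):
        HIGH_VL = 100_000
        if not antibody_positive and viral_load > HIGH_VL:
            return "likely acute infection (within the last ~2 months)"
        if antibody_positive:
            return "established HIV infection (antibodies present)"
        return "no evidence of HIV from these two tests"

    print(interpret_tests(False, 500_000))  # negative antibody test, very high viral load
    print(interpret_tests(True, 20_000))    # both tests positive, lower viral load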

Many people may think that HIV disease in its initial stage does little harm, and that damage to the immune system will be cured by antiretroviral therapy (ART). It is true that after the first flu-like symptoms some people stay healthy for ten or more years, but that does not mean HIV is causing no harm to their immune system: during this period HIV continuously damages it. In fact, HIV particle counts are much higher at the initial stage than later. Exposure to the blood of a recently infected person in the acute infection stage is more dangerous than exposure to the blood of a person infected with HIV for a long time; research has shown that the infection risk is 20 times higher during acute HIV infection than in the later stage.

CD4 cells are a type of lymphocyte (white blood cell) and one of the most important parts of the immune system; their function is to fight and kill disease agents that may harm the human body. A normal person has from 500 to 1600 CD4 cells per cubic millimeter of blood. When a person is infected by HIV, most damage is done to the CD4 cells: the virus becomes part of the cell, and when the cell multiplies to fight the infection, more copies of HIV are made as well. After HIV infection the number of CD4 cells goes down, an indication that the immune system is getting weak. The lower the CD4 count, the more frequently the person will get sick; a reduction in the number of CD4 cells is thus also an indication of HIV.

According to research, up to 60% of CD4 cells are infected at the early stage of HIV, also called acute infection. The lining of the intestine is also damaged very quickly, and HIV reduces the ability to replace lost CD4 cells. It is also possible for these problems to occur before a person tests positive for HIV.
Without treatment the CD4 cell count will go down drastically, in some cases even reaching zero. Initially HIV disease appears in the form of fevers, diarrhea, night sweats or swollen lymph nodes, lasting from a few days to several weeks. The immune system initially produces white blood cells that kill HIV-infected cells, but if the infected person does not use antiretroviral drugs (ARVs), the immune system will, with time, stop producing the white blood cells that kill HIV. Guidance on taking HIV medication is generally not given until there are signs of immune system damage, yet using ARVs during acute HIV infection can help the immune system produce these white blood cells for longer and can help protect the immune system.
Although ARVs help protect the immune system, taking them is not an easy decision to make. Taking ART changes a person's life. Missing doses makes it easier for the virus to develop resistance to the medication, which can limit future treatment options. HIV medication is very strong and expensive and has side effects that make it difficult for a person to stay on it for a long time. People over 40 years of age have weaker immune systems and do not respond to ARVs as well as younger people do. However, not everyone with HIV gets sick right away: some people with a CD4 cell count over 350 and a viral load under 20,000 have a 50% chance of staying healthy for 6 to 9 years even if they do not take ARVs. Most health providers begin ART when the CD4 count goes below 350. Some health providers use the CD4 percentage instead, starting ART if it falls below 15% even when the CD4 count is high; some specialists wait until the count approaches 200. It is not really clear when to start ART. Most doctors consider three things: first the viral load, second the CD4 cell count, and third any symptoms the person has had. Usually ART starts when the viral load is over 10,000, the CD4 count is below 350, or there are symptoms of HIV. One thing about ARVs should be clear: they do not kill the virus; they only slow down the damage to the immune system.
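
The rule of thumb described above can be summarized in a short sketch; the thresholds are the ones quoted in this text, not current clinical guidelines.

    # ART-start rule of thumb as described above (CD4 < 350, CD4% < 15,
    # viral load > 10,000, or symptoms). Thresholds come from this text only,
    # not from current treatment guidelines.
    def should_start_art(cd4_count, cd4_percent, viral_load, has_symptoms):
        return (cd4_count < 350
                or cd4_percent < 15
                or viral_load > 10_000
                or has_symptoms)

    print(should_start_art(500, 20, 5_000, False))  # False: healthy numbers, no symptoms
    print(should_start_art(300, 18, 5_000, False))  # True: CD4 count below 350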

As mentioned above, HIV is not AIDS, so the question arises: what is AIDS? In simple words, when HIV disease reaches its final stage it becomes AIDS. A person is suffering from AIDS if his CD4 cell count is below 200, his CD4 percentage is below 14, or he gets opportunistic infections. There are agents that slow the HIV virus, but there is no cure for AIDS; no medication can sweep HIV out of the human body completely. Drugs can treat most opportunistic infections, but during AIDS these infections become very difficult to treat.

Some recent statistics related to AIDS are given below, showing the number of people living with HIV, new HIV infections, and AIDS-related deaths for different groups in 2008. The statistics are taken from the UNAIDS website; the figures in brackets are UNAIDS estimate ranges.

Number of people living with HIV in 2008
Total: 33.4 million [31.1 million-35.8 million]
Adults: 31.3 million [29.2 million-33.7 million]
Women: 15.7 million [14.2 million-17.2 million]
Children under 15 years: 2.1 million [1.2 million-2.9 million]

People newly infected with HIV in 2008
Total: 2.7 million [2.4 million-3.0 million]
Adults: 2.3 million [2.0 million-2.5 million]
Children under 15 years: 430,000 [240,000-610,000]

AIDS-related deaths in 2008
Total: 2.0 million [1.7 million-2.4 million]
Adults: 1.7 million [1.4 million-2.1 million]
Children under 15 years: 280,000 [150,000-410,000]

Computer Programmer As A Career

Within the span of just a few years, the entire globe has seen many technological advances. It is true that different parts of the globe have seen varying levels of growth, but one thing has been common, and that is growth. Just a few years back, before the technological revolution, people had to stand in never-ending queues for almost everything, be it paying a utility bill, making an account transfer, or even a query as small as a balance enquiry. Individuals had no option but to spend hours booking railway tickets and to make endless phone calls enquiring about hotel reservations for a place hundreds of miles away. Then we jumped into the era of information technology, where most everyday needs are resolved with the click of a mouse, and few individuals can now be seen doing transactions standing at the bank.

We no longer have to wait our turn to know our account balance, or wait for the banks to open in the morning to withdraw cash for emergencies. We need not carry huge amounts of cash and risk theft, but can swipe our cards anywhere in the world. We do not have to be physically present at the bank to open a fixed deposit account, and we can transfer millions from one account to another even when the accounts sit on diagonally opposite sides of the globe. Thanks are due to the software companies in financial services that have worked these vast wonders for us.

To gain information on any subject, in fact even to research a career path, the first thing an individual does these days is to google the best possible option. For every question boggling anybody's mind, Google has become one of the first ways to find a solution. Information technology has brought every comfort to our doorstep, giving the user access to online ticket booking, the latest movies and even shopping. Software is being developed that can reduce the need for doctors to a bare minimum by tracking the symptoms of various diseases and the medicines available for their cure. Unmanned fighter planes are nothing new these days, with the complete flight operation performed by the aircraft's integrated computer system, remotely controlled by an operator. Software companies also serve various other important sectors, for example by:

Automating insurance claims, saving the customer the pain of arranging cash for the hospital while still in it.
Maintaining the accounts of different organizations.
Maintaining customer relationships with the help of automated responses, including customer communication through letters and statements.
Running websites and chat servers that allow social networking, including online gaming.
Designing software to perform online stock trading, and the list goes on.

Obviously, information technology has been the backbone of growth for the world as a whole. Software companies employ sharp-minded software engineers, and these engineers are the architects of this exponential growth. For most individuals, nothing could be as exciting as leading the world to the forefront of this development.

The field of computer systems can be broadly divided into two subfields:
Software systems
Hardware systems

While software systems include the machine-level representation of data and software, control of program execution, programming, development of software products and packages, and design of operating systems, hardware technology involves the design of logic processors and the organization of machines and devices in technologies such as very-large-scale integration.

As part of our study, we will concentrate on the software side of the computer field. Important skills that a computer programmer must possess include good analytical skills, reasoning, logical ability, attention to detail, a good level of math, and good written and oral comprehension. The training generally required for this field is a bachelor's degree. Another fundamental requirement for a programmer is knowledge of programming languages; knowing multiple languages is a big plus that multiplies the chances of fetching a good job. The skills mentioned above are achieved primarily through a four-year bachelor's degree course at a vocational school or university. More than two-thirds of programmers have a bachelor's degree, with preferred subjects including mathematics, computer science, information technology and engineering.

If we go by general logic, the remuneration earned by an individual who carves out the success of an organization by acting as an anchor for its growth should be good by any standard. We will look at the statistics presented by various groups in order to support this claim.

Looking at the number of jobs and the average payout, there were approximately 1.71 million computer experts in the United States in 2002 earning more than fifty thousand US dollars per year. Computer experts include computer analysts, computer scientists, database administrators, system administrators and software engineers, counted on the basis of data published by the United States Bureau of Labor Statistics.

As per the United States Bureau of Labor Statistics (BLS), the following is the breakdown of employees (annual wages in US dollars; "90% above" is the wage exceeded by 90% of workers in the category, and the final column is the estimated number earning over $50,000 per year):
BLS Job Category | Total Employed | 90% above | 75% above | 50% above | 25% above | 10% above | Est. above $50k
Computer programmers | 499,000 | 35,080 | 45,960 | 60,290 | 78,140 | 96,860 | 339,080
Computer application software engineers | 394,000 | 44,830 | 55,510 | 70,900 | 88,660 | 109,800 | 325,991
Computer systems analysts | 468,000 | 39,270 | 49,500 | 62,890 | 78,350 | 93,400 | 346,631
Computer systems software engineers | 281,000 | 45,890 | 58,500 | 74,040 | 91,160 | 111,600 | 239,162
Network and computer systems administrators | 251,000 | 34,460 | 43,290 | 54,810 | 69,530 | 86,440 | 151,700
Network systems and data communications analysts | 186,000 | 34,880 | 44,850 | 58,420 | 74,290 | 92,110 | 121,853
Other computer specialists | 192,000 | - | - | 54,070 | - | - | 103,226
Database administrators | 110,000 | 30,750 | 40,550 | 55,480 | 75,100 | 92,910 | 65,094
Computer and information scientists | 23,000 | 42,890 | 58,630 | 77,750 | 98,490 | 121,640 | 19,142
Approximate number of computer specialists earning over $50,000/yr in 2002: 1,711,879

As per data from CityTownInfo.com, the median salary for computer professionals in 2008 was as high as 69,620 US dollars, and the site terms the unemployment rate fairly low compared with other sectors. It also reports that as many as four hundred and thirty-five thousand individuals were employed in the sector in 2006, and that 73% of programmers had at least a bachelor's degree to their names.

Looking at the lifestyle of a computer professional in a multinational company: he spends roughly 40 hours a week working for the organization, in a workplace that is generally cozy, comfortable and equipped with almost all modern amenities. A novice software engineer generally starts his career with the designation Associate Software Engineer and gets on-the-job training to perform his roles and duties. He is then promoted to Software Engineer and later Senior Software Engineer over the next 3-4 years, depending on his performance and the rules of the organization. After around 5-6 years of experience he is generally promoted to Team Leader, in which role he is typically allocated a team of 8-10 people, again depending on employee strength and other factors specific to the organization. Needless to say, there are decent salary hikes at every rung of this path. After 7-8 years of experience an employee is generally promoted to Associate Project Manager and then Project Manager. A project manager generally handles a team of around 100 spread across various projects headed by different team leaders; he too generally started his career as a simple software engineer, though with around 12-14 years of total experience. Apart from managing the team, the manager's other primary responsibilities include keeping team members satisfied while working in the company's best interest by delivering a good level of satisfaction to the client; budget analysis and convincing customers to expand projects also fall to the project manager. The salary of a project manager is generally 8-10 times that of a software engineer, and the project manager reports directly to the business head. Fetching projects for the company in various sectors and setting targets for resource acquisition are the key responsibilities of the business head, who reports directly to the CEO. At the level of business head and CEO, an additional degree in management is desirable, but they still must have good exposure to computer programming; so, indirectly, computer programming is the stepping stone to even higher designations in any software company. The path described above suits anyone intending to serve a software company, starting as an associate software engineer after a bachelor's degree.

Apart from serving a particular organization, some computer programmers prefer the path of contract jobs or act as consultants for companies. For companies too, employing programmers through contractual agreements rather than as permanent employees is a good way of harnessing people with the specific skills needed for the job, without the trouble of training or retraining newcomers. Contractual agreements range from a few weeks to as long as a few years. However, contract employees charge marginally more than regular employees. Some computer programmers also move into the field of training institutes, educating the young guns.

Being a computer programmer means fun at the workplace, applying the mind in new directions at every turn to develop a new idea that can change your life. In my opinion, being a computer programmer is one of the most attractive jobs at the moment, because not only does it offer good growth prospects, but it also offers an individual the opportunity to pay back to society through new innovations, thus serving all mankind.

Climate Change and Global Warming

Global warming is the continuous rise in the mean temperatures of the earth's atmosphere and oceans. Global warming and its associated impact on human life have been at the centre of intense debate over the last couple of decades. Climate change is the gradual change in climatic conditions over the earth. While global warming is largely driven by human activities that release greenhouse gases into the atmosphere, climate change has been happening since long before man inhabited the surface of the earth (EIA, n.d.).

Despite the concern, a study conducted by the Pew Research Center in 2009 revealed a sharp decline in the percentage of Americans who say there is solid evidence that global temperatures are rising (Pew Research Center, 2009). Another survey conducted in December found that Americans were more concerned with the economy and health issues, with climate a distant third (PEJ, 2009). Yet even as concern over the seriousness of the problem of global warming wanes, there is sufficient evidence of the gravity of the problem and its impact on human life (Greenpeace, n.d.).

Scientists estimate that the average earth temperature could rise by an additional 7 degrees Fahrenheit by the end of the 21st century unless serious efforts are made to slow the emission of greenhouse gases into the atmosphere through the combustion of fossil fuels and other sources (NRDC, n.d.). Ironically, while Americans grow more worried about their economy and their health, global warming intensifies those worries indirectly through its effects on the economy and human health. Global warming is associated with heavy costs in the form of increased instances of disease, unpredictable weather patterns, more frequent and severe wildfires, the death of plants and animals, and other costly effects (NRDC, n.d.). Global warming thus has an important role to play in the economic wellbeing and health of human beings, and it should therefore be accorded the utmost attention.

Pollution

Pollution refers to the introduction of foreign bodies into the environment, causing instability, harm, discomfort or disorder in the environment or ecosystem to which the contaminants are introduced. Pollution can take the form of energy or of chemical substances; the energy forms include heat, noise and light. The pollution elements, known as pollutants, are energies or substances that are either foreign to the environment or occur naturally; naturally occurring energies and substances are considered pollutants only when their presence exceeds naturally occurring levels (Sell, 1992; Theodore et al., 1994).

Forms of pollution
Several types of pollution take place on a daily basis, leading to high levels of pollutants in the environment. Air pollution is the most common of all and affects the environment severely. It takes place when particulates and chemicals are introduced into the atmosphere through processes such as industrial emissions and the emission of nitrogen oxides by motor vehicles. This form of pollution has a great impact on the atmosphere and its natural processes; some of the pollutants introduced react with sunlight, producing harmful products. Since human beings, animals and plants all breathe air from the atmosphere, the presence of pollutants contaminates the air and lowers its quality. Breathing polluted air regularly over extended periods can have dire consequences: organisms taking in such air can easily contract various diseases, and their body systems are weakened, making them more vulnerable to disease (Colls, 2002).

High levels of air pollution lead to poor visibility due to artificial fog in the atmosphere, and poor visibility is cited as one of the major causes of accidents in both road and air transport. The air pollution that mainly causes poor visibility occurs when industries release large amounts of dark smoke into the atmosphere. This type of pollution is most common in urban centers, especially in developed nations where numerous industries produce thousands of tonnes of such smoke; accidents caused by poor visibility are therefore more likely in such regions than in rural areas. In nations where this form of pollution is rampant and many accidents are likely, lighting must be provided in order to improve visibility and thus reduce the probability of accidents caused by poor visibility (Harrison, 2001; United Nations Economic Commission for Europe, 2007).

Ocean pollution is one of the main problems affecting the global oceans. It directly affects the organisms that live in the oceans and indirectly affects human resources and health. Toxic wastes, oil spills and the dumping of harmful materials are the chief sources of ocean pollutants. Noise, as one of these pollutants, greatly affects marine animals; most of them, especially fish and marine mammals, are highly sensitive to noise or any other form of sound. Underwater, noise can travel very long distances, covering large ocean regions and thus potentially preventing sound-sensitive marine animals from hearing their predators or prey, finding their way, or locating their mates. Indeed, as a result of noise pollution in the oceans, the populations of dolphins and whales have greatly declined (Weilgart, n.d.).

Oil spillage is a major source of ocean pollution, with very adverse effects on marine life. As a result of oil spillage, marine water is deprived of air circulation, causing marine life to die in huge numbers for lack of air to breathe; the feeding of marine animals is also interfered with, as their food sources are destroyed. The toxic pollutants disposed of in the global oceans have very adverse effects, both direct and indirect. Marine life is directly affected by the toxic elements introduced into its habitat, dying or suffering from disease, while some of these toxins are consumed by marine organisms that are in turn consumed by human beings, who thus suffer indirectly from ocean pollution (Advameg, Inc., 2009).

Soil pollution is yet another form of pollution that is widespread the world over and has very adverse effects. It mainly takes place when pollutants and other contaminants are introduced into the soil through various processes. Some pollutants introduced into the soil take a long time to break down, so their effects are felt over much longer periods. Pollutants usually lower the quality of the soil, making it less productive, especially when used for agriculture. Furthermore, some pollutants are absorbed by plants and are then taken in when such plants are consumed by animals or humans as food; such pollutants are harmful not only to the plants but also, indirectly, to their consumers (Singer & American Association for the Advancement of Science, 1970).

The three forms of pollution discussed above are instances where pollutants are introduced into an environment or ecosystem. There are also forms of pollution in the form of energy. Noise is the major form in this category, encompassing aircraft noise, roadway noise and high-intensity sonar. Light is another form of energy pollution, including astronomical interference, over-illumination and light trespass. Visual pollution occurs when objects are placed in such a way as to interrupt good visibility; it can arise from motorway billboards, overhead power lines and scarred landforms (Dales, 2002; Shiva, 2002).

Conclusion
The presence of pollutants in an environment or ecosystem interferes with its natural processes, creating imbalance. In virtually all cases, pollution leads to very harmful effects and should therefore be minimized as much as possible. Governments and other organizations usually have to meet very high environmental costs because of pollution, and such costs can be a great impediment to the economic growth of a country, especially when they are unavoidable. Governments around the world should therefore take pollution much more seriously and deal with it objectively in order to prevent the menace that results from it.

The Benefits and Pitfalls of Using Web Services

Web services offer ample benefits for developing business in the modern environment. The advantages of web service technology outweigh the disadvantages; however, issues must be constantly addressed to maintain standards. Cost effectiveness, code reuse, application and data integration, and versatility are what best attract business organizations to adopting web services in their growth path (Cavanaugh, 2006, p. 7).

Web services bring immediate responsiveness to an organization. The scope for outsourcing is greater with a service-oriented architecture in place (Greer Jr, 2006, p. 17). Ad hoc business relationships and the creation of potential marketplaces are the result of web services. The ability to integrate business solutions into growing technology, and to adapt that technology to the changing needs of business, are the positives achieved through web services. Ultimately, it is all about improving business performance: optimizing cost, automating business processes, adapting to change, and entering new markets are all possible via web services. Web services benefit growing organizations through:
(1) reduced integration cost, (2) improved return on investment (ROI) in existing systems, (3) increased application portfolio agility, (4) enhanced IT operational efficiency, (5) shorter application time to market, (6) easier merger and acquisition (M&A) activity, (7) reduced supply chain friction, (8) service-oriented architecture (SOA) feasibility, (9) automated and orchestrated business processes, and (10) enterprise transformation (Greer Jr, 2006, p. 21).

Code reuse is an advantage of web services: a single service may be consumed by several clients to fulfill different business objectives. The services provided are cost effective mainly because of their open standards. WSDL is standards-based, and it prevents developers from inserting language-specific constructs. Interoperability is the rationale behind using web services: the need to create highly customized applications for integrating data is avoided, and interoperability comes easily. The associated learning curve is relatively small, and the investment in additional technology is kept to a minimum because the underlying protocols already exist everywhere. A minimal sketch of what consuming such a service looks like follows.
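Because the protocols are just XML over HTTP, a client needs nothing beyond a standard library to invoke a service. The Python sketch below posts a SOAP envelope to a hypothetical endpoint; the service URL, namespace, operation, and parameter names are illustrative assumptions, not a real API.

import urllib.request

# A hypothetical SOAP request body; the namespace, operation (GetQuote),
# and parameter (Symbol) are placeholders for illustration only.
SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stockservice">
      <Symbol>ACME</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

def call_soap_service(endpoint, soap_action, envelope):
    # POST the XML envelope with the standard SOAP headers and
    # return the raw XML response as text.
    request = urllib.request.Request(
        endpoint,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": soap_action},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")

# Any client, in any language and on any machine, can issue this same
# request, which is the interoperability argument made above:
# xml = call_soap_service("http://example.com/soap", "GetQuote", SOAP_ENVELOPE)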

Web service-based applications mostly support e-business processes (Lee, 2008, p. 188). Application development trends fall into three categories: (1) enterprise application integration, (2) interoperability with key business partners, and (3) interoperability across multiple enterprises.

An important factor in the implementation of a service reuse program is the relationship between suppliers and consumers. It is important to examine the issues related to web services from the perspective of all the stakeholders, namely web service providers, web service consumers, and standards organizations; the relationships between these three have to be understood at all levels (Lee, 2008, p. 189). The challenges in integration are mainly categorized as technical and managerial, and they are presented from the three stakeholders' perspectives. The technical challenges are (1) service description and profile, (2) web service accessibility and documentation, (3) architecture standards and infrastructure, (4) design requirements, and (5) web service evolution (Lee, 2008, p. 193).

The managerial challenges are (1) pricing and quality of service commitments, (2) identifying new services, (3) customer feedback and support, (4) partnerships with third party providers, and (5) demand management and liability (Lee, 2008, p. 195).

Web services look simple to users, but developing and implementing them is more challenging; writing the code, for example, can be a complicated process. There are tools that auto-generate WSDL code. To meet the challenges of modern web services, Altova has created a suite of tools for designing and building web services graphically, allowing developers to build well-designed, standards-conformant, interoperable web services without manual coding. Developers can build their WSDL files graphically, with validation and editing help; the WSDL code is generated behind the scenes, where it can be validated and edited later. Altova also helps with mapping and automation to connect the corresponding data sources.

Altova XMLSpy and MapForce help to create web services from start to finish in a visual manner, making development quicker and reducing the errors introduced by manual coding (Yu et al., 2006, p. 20).
XML-based web services open up tremendous business opportunities. We can integrate our software with any other piece of software and run it on any machine. This ease of integration enables new business opportunities in an efficient manner, and applications can be developed much faster.

No external data source is required: we can request information in real time and transform it into our own format. With the advent of XML technology, web services readily make information available to anyone at any time, in any place, and on any smart device. Dollar Rent A Car, for example, built, tested, and deployed in record time a web service that translates reservation requests and data between the company's mainframe system and an airline partner's UNIX servers; the same integration model can be reused with any business partner. Expedia.com is a web site that finds the lowest-price itineraries, and these itineraries are in the process of being converted into communication centers where users get timely information through web services (Benefits of Web Services, p. 1).

According to REST (Representational State Transfer), each unique URL is a representation of some object. Yahoo's web services, including the Flickr and del.icio.us APIs, use REST. PubSub, Technorati, eBay, and Amazon offer web services for both REST and SOAP, while Google implements its API services using SOAP. REST is the more fashionable way of creating web services. The advantages of REST web services are (1) light weight, without a lot of extra XML markup, (2) human-readable results, and (3) ease of building, with no toolkits required. The advantages of SOAP are (1) ease of consumption, (2) rigidity, and (3) development tools. Google's AdWords web service uses SOAP headers and is really hard to consume. On the other side, Amazon's REST web service can sometimes be tricky to parse because it is highly nested and the result schema can vary quite a bit based on what you search for. Whichever architecture is chosen, it has to be easy for developers to access and well documented (REST vs SOAP Web Services, 2005, p. 1). A sketch of a REST-style call follows.
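For contrast with the SOAP sketch earlier, a REST-style call is nothing more than an HTTP GET against a URL that names the resource. In the Python sketch below, the host and query parameter names are invented for illustration; no toolkit is required, which is precisely the "easy to build" advantage cited above.

import urllib.parse
import urllib.request

def rest_get(base_url, **params):
    # Encode the query string and issue a plain HTTP GET; the response
    # body (XML or JSON) comes back as text, readable without a toolkit.
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

# Each unique URL represents some object, per REST; for example:
# body = rest_get("http://example.com/stock", symbol="ACME", format="json")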

XML is simple and reliable, and it makes it easy to blend existing systems with new applications. However, plain-text protocols use a verbose method of identifying data, so service requests are larger than requests encoded with a binary protocol. The extra size becomes a hindrance over low-speed connections or when the connection is extremely busy. HTTP and HTTPS are also not meant for long-term sessions: after making requests, clients are disconnected within a specified time, whereas in a CORBA environment the server may stay connected for an extended period. Moreover, HTTP and HTTPS are stateless protocols; the client and server know nothing of each other while no data is being exchanged or during a power outage, so there needs to be a way to track when a client has disconnected.

The server assumes that the client is inactive if it receives no request within a predetermined amount of time, and it then removes the information it has gathered. This extra overhead means more work for web service developers; a minimal sketch of such session bookkeeping follows.
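The sketch below shows one way, under assumed names and an assumed idle limit, that a service might track a last-seen timestamp per client token and sweep out sessions that have gone quiet. It illustrates the bookkeeping described above, not a prescribed implementation.

import time

SESSION_TIMEOUT_SECONDS = 15 * 60  # assumed idle limit for illustration

class SessionStore:
    def __init__(self):
        self._last_seen = {}  # client token -> timestamp of last request

    def touch(self, token):
        # Record that this client just made a request.
        self._last_seen[token] = time.time()

    def sweep(self):
        # Drop any session that has been idle past the timeout,
        # mirroring the server-side cleanup described above.
        cutoff = time.time() - SESSION_TIMEOUT_SECONDS
        for token, seen in list(self._last_seen.items()):
            if seen < cutoff:
                del self._last_seen[token]

store = SessionStore()
store.touch("client-42")   # a request arrives
store.sweep()              # idle clients (none yet) would be removed here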

A web site provides a large customer base: it is available 24 hours a day, keeping open the option of doing business with client and server exchanging data back and forth. Web sites are also environmentally friendly, reduce staffing needs, and provide an online shopping experience, among other benefits.

Though interoperability is a major concern, quality of service, web service management, and security are the issues haunting web services. A major challenge web services face today is developing a security and privacy mechanism that fulfills these requirements (Yu et al., 2006, p. 2). The foundational underpinnings are often overlooked, with the focus falling more on technological aspects when deploying web services. A web service management system (WSMS) is a comprehensive term for the web service life cycle, including developing, deploying, publishing, discovering, composing, monitoring, and optimizing access to web services (Yu et al., 2006, p. 3). Issues pertaining to both individual web services and groups of web services, such as security protocols and policies and quality of service for optimization purposes, have to be settled before services are combined (Yu et al., 2006, p. 4). Selecting only individual services is not enough, since some services may be related to each other; combining the properties of multiple services is an important issue.

Security is one of the major issues in deploying web services. Authentication, authorization, confidentiality, and integrity are to be maintained to resolve security issues (Yu et al., 2006, p. 9).

A Netherlands-based web services software maker launched a web services package called Cordys for enterprise applications. The specialty of Cordys is its unified-architecture approach: it creates new business logic and is built on XML and open standards.

Web services are adopted along two parallel paths. The IT department drives the first path, which has very few initiatives involving production deployments of the technology. Non-technology line executives drive the second adoption path, which is more prevalent to date. The rapid pace of adoption by line executives is encouraging because it confirms that the real economic value proposition of the technology is compelling. These executives are focused on near-term business impact; in other words, they are more opportunistic in terms of business. Altogether, from a business perspective, the early adoption of web service technology has been largely ad hoc and opportunistic. Such deployments may solve near-term business problems, but without a broader architectural vision for long-term value creation. Web services technology will certainly deliver near-term business impact; the challenge lies in maximizing this near-term impact while also building the foundations for the real economic prize: the longer-term business value creation opportunities created by the deployment of SOA architectures (View Point, 2003). It is difficult to predict the future of web services, but ZapThink believes that the Web Services Revolution will be earth-shattering when it arrives in full force (Greer Jr, 2006, p. 25).

Computer Uses in Society

In every sphere of life, computers play a major part in information processing and in the storage of large volumes of data. In the medical field, computers have been used to store patients' information, diagnoses, research papers, medical journals, and important medical documents. As Oak states, "This information can be effectively stored in a computer database." Unlike before, doctors no longer have to memorize what medication each patient is on and the amount needed for each medication.

With computers, patient monitoring has also been made easy and efficient: patients are now monitored before, during, and after procedures. As Candi states, "Prior to their advent, it was required that a nurse physically monitor patients." This would sometimes lead to death if the nurse took too long before the next examination. Computer software can also be used in the examination of internal body organs. Previously, a patient had to take the doctor's word when seeking medication, but can now see internal body organs through radiography scans.

The computer revolution in the field of environmental science has ensured optimum consumption of natural resources, and environmental science and technology is thus gaining attention worldwide (Environmental Science & Technology). Computers are now universal in the workplace (Belford, par. 68). For example, telephone systems are multilevel computer networks, as opposed to operators linking cables by hand. In the banking industry, errors in monthly statements have been reduced because deposits and withdrawals are logged into a customer's account right away rather than calculated manually. Checkout counters in retail stores use a bar-code scanner rather than a cash register that requires the clerk to type in the code for each item.

The invention of the Computer Comparison Statistics (CompStat) process has helped in managing police operations. The process examines crime and its effect on the community and the police department. The ability and speed of law enforcement agencies in arresting criminals has increased thanks to computer technology. As Ray states, "Advanced global positioning satellite technology and cell phone ubiquity has provided law enforcement officials with additional resources to track and investigate criminal activity."

Throughout history, the Internet has been the most exceptional discovery in the field of communication. Its advantages are easy and better communication, the availability of huge amounts of information, and the provision of all-around entertainment and commerce. Its disadvantages include theft of personal information, unwanted bulk messages, pornography, and virus threats to computers. The Internet may inspire insightful thought, as it sometimes does not provide firsthand answers but helps in doing more research. Use of the Internet for school projects should be limited, not banned, because its advantages outweigh its disadvantages. As Matthew states, "Yet, from what I've seen, the educational benefits of online access are worth it. Yes, parents have to be vigilant."
The website Survive 2012 lists one occurrence as the most predictable disaster and the leading candidate for a 2012 doomsday scenario: a collision between Earth and a galactic visitor. A large percentage of 2012 theorists expect a replay of the cosmic event believed to be responsible for the extinction of the dinosaurs: either a comet, an asteroid, or even a heretofore unknown planet will play a devastating game of irresistible force meets immovable object. Fueling such assertions is the possible existence of dark comets, which contain no surface ice and therefore would not be visible to human observers.

Any gravity simulator would demonstrate the slim possibility of a collision event. Even rogue astral bodies travel in an orbit, and any threat's orbit would have to coincide precisely with Earth's orbit, with a margin of error of less than 0.001 percent. Any such object would also likely require a slow speed in order for another object's gravitational pull to attract it, and a slow-moving object is more likely to announce its presence years in advance. Regarding the comet threat, scientists estimate that roughly ninety percent of comets passing within range of the solar system are known and documented. As for asteroids, astronomers project the next close encounter in 2036; moreover, this threat poses an approximated 1 in 450,000 chance of impact.

Although collision odds may be negligible, could even a near-miss kickstart a cataclysmic chain of events?
According to proponents of the geomagnetic reversal theory, the planet's North and South poles are due for a switch-up in 2012. Most theory proponents hypothesize that every 500,000 to 750,000 years the poles reverse themselves, with the North in effect becoming the South Pole and vice versa. These magnetized areas anchor Earth's axis of rotation, and are believed to retain their magnetic capabilities from liquid iron in the planet's core, salt in the world's oceans, or crust-equator rotational differences. Any outside disturbance of these systems could upset polar stability. As one might imagine, a shift in this planetary mechanism (either natural or induced) could facilitate significant global changes; proposed worst-case disasters have included everything from supervolcano eruptions to the onset of a new ice age. Reversal supporters already see evidence of a polar shift: some point to locations where compasses will offer a northern reading for South Pole-based Antarctica. Scholar Jessica Quinn states [quote]. Further, geologists have uncovered ancient fossilized palm trees in Alaska. Independent empirical studies courtesy of Princeton University's Adam Maloof and Paul Sabatier University's Galen Halverson also seem to support the geomagnetic reversal theory.

Indeed, many scientists acknowledge that geomagnetic reversals have likely occurred, perhaps many times, in Earth's history. Yet these researchers dispute the length and severity of a polar shift event. For one, they say, a reversal would not take place in one catastrophic jolt, but would instead unfold over thousands of years. As such, any global surface changes would occur more gradually and may already have been in effect for many years. (Bradden quote) Subtle changes and slower shifting of Earth's tectonic plates would be unlikely to produce the type of doomsday disasters depicted in various fictional works. As with many of their theories, 2012 proponents may have mixed elements of fact with generous doses of fantastic fiction.

A massive solar flare served as the catalyst for doomsday in the recent motion pictures 2012 and Knowing. In 2006, NASA released a report seemingly confirming such a fear: 2012 will herald the next solar maximum. Scientists characterize this phenomenon as a period of frenetic solar activity, complete with intense sunspot cycles and "swept up" magnetic fields courtesy of the sun's weather controller, its "conveyor belt." Theorists almost immediately connected this forecast with the belief that a CME (coronal mass ejection), or monstrous solar flare, would cause Earth's imminent doom. A solar storm in the mid-nineteenth century facilitated telegraph system failures across the world; in today's era of satellites and electronic power grids, such an event on a larger scale could lead to debilitating impacts on global economic and electronic systems. One scholar discovered a parallel between solar maximums and increased natural disasters, wars, and even stock market falls. Nearly four years before NASA's solar prediction, he also foretold a strong period of solar activity around 2012.
(NASA quote)

Scientists see the dangers as decidedly less than cataclysmic. For one, they say, the next solar maximum could occur as soon as one year before or as late as two years after the projected 2012 date. In addition, the so-called solar maximum might very well amount to a barely blip-worthy solar minimum, with weak solar activity. Even the most fantastic sunstorm, they contend, would at most temporarily affect satellite communications. Judging by long-term studies of similar stars, our Sun is simply not at a stage in its development where it could emit an extinction-level ejection, or, for that matter, an ejection that would significantly impair our planet.

According to one theory, 2012 will mark our solar system's alignment with the center of the Milky Way Galaxy. Such an event occurs only once every several thousand years, which makes it an event for astronomers, and a danger for Earth. Since a powerful magnetic force (a black hole) rests at the galaxy's equator, a system alignment would create havoc. (Bradden quote) Earth's close proximity to these magnetic filaments is said to be already evidenced by the increasing planetary changes we are witnessing even today, characterized by devastating natural disasters and global climatic shifts; 2012 will facilitate a close encounter with this force that could be life-altering, or life-ending. (2012's astronomical significance with mathematician Sergey Smelyakov) Scholar and astrologer John Major Jenkins believes the winter solstice of December 2012 will signal the onset of a new astrological age: the Age of Aquarius, or the final Age. During this time, the orbital plane of the solar system will align with a cluster of dust clouds in the galaxy, known as the Great Rift, which may in turn strengthen the gravitational pull from the galaxy's center. (Mayans and Venus, locate) More extreme versions of the galactic alignment theory predict that a catastrophic superwave of energy will emit from an implosion at the Milky Way's center, an event that would incinerate our solar system instantly.

Utilizing a NASA online device known as the Solar System Simulator (), one can ascertain that no major planetary alignments are scheduled for 2012. In fact, the closest projected planetary alignment is in 2040, twenty-eight years after the supposed doomsday. Second, even if the planets did align, where would this alignment end? Despite the dire and fear-inducing claims about what lies at the galaxy's center, many modern scientists argue that no one really knows where the center of the galaxy resides. Even those who do offer an estimate insist that the galaxy's center is not even in our solar system's orbit; rather, our solar system rests comfortably above this particular orbital plane by roughly one hundred light years. And finally, what catastrophes would await if an alignment did somehow occur? Nothing. Once again, many in the scientific community debunk the projected doomsday outcomes of the alignment theory. Planetary alignments, they say, actually occur an average of once every sixty years. An alignment event in 2000 brought about similar cataclysmic fears; the event passed without incident, as in every recorded astronomical instance. As for the fear of a dangerous gravitational force, scientists note that the moon's own strong gravitational pull would effectively cancel any outside influences. Further, the gravitational effects of any presumed black hole would only come into play if one rested at the very cusp of the dead star. Could reports of the Earth's untimely demise be exaggerated?

Jatropha Biodiesel: The Future of Energy

What are biofuels? Biofuels are substances derived from natural sources such as plant matter and used as a replacement for conventional fuel, with the intention of creating a source of fuel that is economically viable and environmentally friendly. The focus of this paper is biodiesel, and in particular the variant produced from the Jatropha curcas plant, as an effective and economically viable source of fuel.

Biodiesel
Jatropha curcas is a poisonous scrub weed of the euphorbia family originating in Central America. One of its most interesting characteristics is that its seeds have a high content of natural oils; when processed, they produce a naturally occurring fuel that can replace diesel fuel in cars. One of its selling points is that the plant requires little supervision after being planted, needs little water and no fertilizer, and can be planted on land deemed unsuitable for normal food crops or for any other sort of farming venture. As such, planting Jatropha does not endanger a country's food supply, since it does not have the same land requirements as corn, sugar cane, or soy. Moreover, biofuel sources such as corn, sugar cane, or soy beans, which also serve as part of a country's staple diet, cause problems for the local economy: their use as biofuel makes their prices shoot up dramatically as more and more of the crop is devoted to fuel production and less to feeding people. Jatropha is largely inedible and does not need the same type of soil; in fact, it can grow just fine, and even thrive, in inferior soil.


Fuel Storage Methods
Biodiesel has poor oxidation stability, hence oxidation stabilizers need to be added to the fuel mix to address long-term storage. Moreover, since biodiesel is organic in origin, it has a tendency to turn into a gel-like substance at very low temperatures, so tanks must be properly insulated. Tanks used to store biodiesel must therefore be made of materials that do not oxidize easily, such as aluminum or steel, and have proper insulation.

By using the website's online CO2 emission calculator, we can see the difference between the CO2 emissions of diesel and biodiesel, using two examples to show the difference at the small and large scale. The small-scale example uses two exactly identical cars, one running on normal diesel and the other on biodiesel. The control factors behind this example are the following:

A.) The type of car used is a diesel lower-medium car, an example of which is the Ford Focus.
B.) The mileage of both cars is 12,000 miles for the current year, with a previous mileage of 10,000 miles for last year.
C.) One car uses only biodiesel while the other uses normal diesel.
D.) Both cars are less than 14 years old.
E.) Finally, both cars are used by only one person.

Example: First, select the type of car you have from the list.
From your last and previous MOT certificates, enter the recorded mileages into the table below. The table will subtract the previous from the last to give you last year's mileage. Make a guess at your share of the car's use and enter this in the table below as a decimal. For example, if you are the sole user of the car, enter your share as 1; if your share is 50%, enter 0.5; for one third, enter 0.33; for 25%, enter 0.25; and so on. A short sketch of this calculation appears below.
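To make the arithmetic concrete, here is a small Python sketch of the calculator's small-scale steps. The emission factor per mile is an assumed placeholder, not the calculator's actual figure; the halved biodiesel factor follows the replanting argument discussed below.

# Assumed emission factor for a lower-medium diesel car; the real
# calculator's figure may differ.
DIESEL_KG_CO2_PER_MILE = 0.35
# The text gives biodiesel half the conversion factor (explained below).
BIODIESEL_KG_CO2_PER_MILE = DIESEL_KG_CO2_PER_MILE / 2

def annual_co2_kg(last_mileage, previous_mileage, share, kg_per_mile):
    # Last year's mileage is the last reading minus the previous one,
    # scaled by the user's share of the car's use (a decimal).
    miles = (last_mileage - previous_mileage) * share
    return miles * kg_per_mile

# Both cars: 12,000 miles now, 10,000 previously, sole user (share = 1).
print(annual_co2_kg(12000, 10000, 1.0, DIESEL_KG_CO2_PER_MILE))     # diesel
print(annual_co2_kg(12000, 10000, 1.0, BIODIESEL_KG_CO2_PER_MILE))  # biodiesel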

Large Scale
In the large-scale example we will compare two plants, one a diesel power plant and the other a biodiesel power plant, with the calculations yet again courtesy of the same online calculator. The factors for the example are:
Two power plants, one biodiesel and one diesel, each using 1 million liters of fuel every day of the year.

In both examples you will notice that biodiesel is given half the conversion factor for CO2 emissions. This is because, while biodiesel releases almost the same amount of CO2 into the air as regular diesel, more Jatropha plants are planted to renew the resource, and these plants reabsorb the CO2 that was released. This differs from the normal diesel fuel-burning cycle, in which the mechanism for reabsorbing the released CO2 is left up to nature; in the Jatropha cycle, humans take an active role in ensuring the sustainability of the resource by planting more of the said plant, which naturally reabsorbs the released CO2. A sketch of the large-scale comparison follows.
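Under the same assumptions, the large-scale comparison reduces to a few multiplications. The diesel factor of roughly 2.7 kg of CO2 per liter is an assumed illustrative value, not the calculator's own figure; biodiesel is again credited at half that, per the replanting argument above.

LITERS_PER_DAY = 1_000_000          # each plant burns one million liters daily
DAYS_PER_YEAR = 365
DIESEL_KG_CO2_PER_LITER = 2.7       # assumed factor for illustration
BIODIESEL_KG_CO2_PER_LITER = DIESEL_KG_CO2_PER_LITER / 2  # halved, per the text

annual_liters = LITERS_PER_DAY * DAYS_PER_YEAR
diesel_tonnes = annual_liters * DIESEL_KG_CO2_PER_LITER / 1000
biodiesel_tonnes = annual_liters * BIODIESEL_KG_CO2_PER_LITER / 1000

print(f"Diesel plant:    {diesel_tonnes:,.0f} tonnes of CO2 per year")
print(f"Biodiesel plant: {biodiesel_tonnes:,.0f} tonnes of CO2 per year")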

Technology Gives History a Useful Limb: An Analysis of Joel Tarr's Lecture

In his lecture "Why Technology?" Joel Tarr looks back at the things that influenced and shaped his interest in contemporary technology. In doing so, Tarr also retraces the effects of technology on the industrial and environmental landscape of urban Jersey City. At the start of his lecture, Tarr recognizes the need to identify how we got to where we are now and what the turning points were. Tarr initially reveals that, as an urban environmentalist, he was astonished to be awarded an honor that has everything to do with technology when his academic background seemingly has nothing to do with it. This astonishment, however, leads to a discovery of the power of technology in changing societies.

Tarr says that even at a young age he had already experienced the fruits of technology, albeit far less complex than technology is today; his environment at that time was already teeming with machines, and there was no escape. After the academic pursuits of his adult life, he eventually found himself deeply engaged in several interdisciplinary research grants. From there, Tarr used his expertise in the social sciences, specifically in urban planning and history, to find new ways to improve the different technologies that societies use, including, among others, waterway systems. It is this story of his life which he uses as a premise to say that history is not the only discipline worthy of our attention in overcoming new challenges. Indeed, history is not the only discipline we can learn from in order to solve contemporary problems.

History, however, should not be relegated to the sidelines altogether. What Tarr presents in his lecture is the idea that the lessons history offers can and should be combined with what we are able to learn from other disciplines. Take, for example, Tarr's earlier research grant. He was teamed up with engineers, a few other urban planning specialists, and, interestingly, a historian. The roles of the engineers and urban planning specialists are easy enough to identify; as for the historian, the connections are not as easily recognizable, because the study of history has typically been aligned with pure academics rather than practical application. In spite of this, Tarr suggests that the historian had a lot to offer, as in fact the historian did: it was the historian who provided historical insights into what the city's waterway system was in the past, and, more importantly, who gave a new perspective to what could have been a purely technological pursuit. This, Tarr recalls, was one of the reasons why their research turned out to be a huge success.

Tarr also recalls that at a time when Jersey City was heavily industrializing, its administrators realized the need to change the urban environmental landscape. The reason was that people began to move out of the city as the place grew more and more industrialized; unless the problem was resolved as soon as possible, Jersey City faced the threat of turning into a city of factories and all other sorts of structures with very few inhabitants. Understanding the need to address this concern, the city administrators developed ways to revitalize the city, making the place more human-friendly, as it were, for the residents. One important aspect of the process was the involvement not only of those who were experts in the technologies needed but also of those who were very well aware of the history of the city. In this, Tarr brings to mind the idea that technological developments have strands of history embedded in them.

Thus, the answer to Tarr's question, why technology, rests on our understanding of the power of technology. In other words, the answer depends on our knowledge of what technology is capable of doing. In many ways, technology is capable of improving society, as in the case of Jersey City. To say that technology is a powerful tool in improving societies is to acknowledge that there are many things in societies that need constant improvement as these societies continue to move forward. This is where history enters.

To understand the history of a city, for instance, is to realize not only its strengths but also its weaknesses. Tarr recognized this principle in his lecture when he asserted that he tried to move away from Jersey City because it was transforming into something new, yet something that he did not want as a resident. Tarr knew quite well that the place where he lived was morphing into an urban landscape more than ever before; while he understood the city's potential to grow, he also frowned upon some of the disadvantages that growth brought with it. In other words, Tarr confirms the idea that history reveals both the good and the bad things about a place.

Technology endows society with the ability to correct itself. Put another way, technology allows people to refine their society based on their understanding of its history. Certainly, there is no single way to handle the complexity of the challenge of improving upon society's weaknesses. Technology alone will not suffice to do the task; history alone will not suffice either. Rather, it is the aggregate power of history and technology which humanity can use to achieve the seemingly difficult, if not impossible.

Why technology? It is because technology gives history a useful limb. In more practical terms, urban planning specialists and engineers provide the operative side to some of history's compelling theories and discoveries. People may be able to discern the historical flaws of a city's urban planning, but it will be a burdensome task to address these problems without the help of technology. A city's outdated and heavily problematic waterway system will remain a lesson in history unless people begin to realize that technology can provide the methods and facilities necessary either to correct the mistakes of the past or to improve on what has been done before. Several more examples could be given to make this point, but the more important thing to remember is that technology and history go hand in hand, as neither one can fully transform a society for the better.

There is a far more interesting lesson to be learned from Joel Tarr's lecture. It is this: the more people attempt to move away from history, the more they are drawn towards it. Tarr's life experiences are fine illustrations. Even if people move away from history for a time, when they return, as history seems to repeat itself, they will realize that what they learned while away can only help them retrace and improve upon the lessons that history taught them. Tarr's lecture leaves the mental imprint that history is not as useless as it first appears; only through the aid of other disciplines can one fully appreciate the need to do what should have been done earlier. An awareness of the power of technology is also an awareness of the changes that history demands.

Scientific Inventions and Development

Scientific inventions and developments have been part and parcel of human history. It is argued that the two concepts are essential in helping mankind adapt to the environment. Generally, no human society has ever existed without the fundamentals of science. In modern times, science involves a logical thinking process and the application of mathematical concepts such as statistics; scientific concepts of this kind can therefore be traced back to ancient Greek civilization (Stewart, 2008). This paper explores the reasons for scientific inventions and developments and their social implications.

Reasons
The main reason for scientific innovation and development has been identified as necessity. Man has always been inventive in exploring his environment in search of the best way to survive. Scientific undertakings have also aided man in explaining nature: moving beyond religious mythology, ancient Greek philosophers and scientists sought to explain nature using natural explanations, whereas previously phenomena had been explained through supernatural myth and tradition. Thales, Aristotle, Democritus, Hippocrates, and Ptolemy, among others, gave scientific explanations for natural occurrences (Stewart, 2008).

Impacts
Scientific inventions and developments can be regarded as the mother of the technological advances being witnessed in the world today. The Industrial Revolution that took place in Europe from the mid-18th century into the 19th century saw fundamental shifts in agriculture, textile and metal manufacture, transportation, economic policies, and social structures in Europe (Montagna, 2010). On the positive side, the Industrial Revolution brought about increased food supply, improved production efficiency, and large profit margins due to increasing commerce, all of which can be attributed to the scientific innovations of the time (Montagna, 2010). Generally, scientific inventions have contributed greatly to the technological advancement currently being witnessed in the world. Advancement in technology has affected all aspects of life, including the social, economic, and political domains, with both positive and negative consequences.

Socially, scientific inventions and developments can be said to have cut both ways. Despite its good intentions, science and technology has been identified as one source of social disparity in human society. For instance, the developed world boasts high technological advancement, whereas the under-developed world is characterized by poor technologies. Gender disparity has also been an issue in technological development, as women in most developing countries are sidelined, as observed by Bonder in her research in the Latin America and Caribbean region. She notes that although there is an upsurge in the number of women students and professionals in science and technology careers, they are concentrated in particular disciplines and fields of research and are not well represented at the decision-making level in scientific institutions. They are also accorded few incentives and little support in accessing, using, and producing scientific and technological innovations (Bonder, 2004).

Conclusion
Scientific inventions and developments are usually intended for the general well-being of human beings. Nevertheless, challenges arise with these innovations, and the human race has to move quickly in addressing them, as some may threaten the very fabric of society. It is therefore imperative to observe that as much as we embrace technology, we should give a second thought to the impending consequences. This will go a long way in helping human society realize its developmental dreams with minimal negative repercussions from its inventions.

Scientific Research and Social Difference

Technology can rightly be said to have brought desired changes to every occupation in all developing societies. No society can claim to have developed without the benefits of technology. The application of technology and the changes it brings are in fact intended and necessary, as they are part of the larger development of society. Scientific research may well be described as the heart of our development and progress.

Developments in every sphere, and the benefits they provide for mankind, are directly or indirectly attributable to scientific research. Research in science and technology has indeed made our lives better than those of our forefathers. Apart from improving the quality and comfort of our lives, it also contributes to a secure future. Scientific research is driven by a sense of requirement or demand, which provides either direct or indirect benefit to users. Inventions may be directed at improving the quality or quantity of products or services. For instance, the need to communicate instantly anywhere across the globe led to the development of the Internet, and the emergence of a new disease or health issue sees researchers working to solve it.

There is no doubt that all scientific research adds to our knowledge base, quite apart from development. Unfortunately, on some occasions such research is directed only at human curiosity rather than any definite or beneficial purpose. The outcomes of such research, which may also be expensive, have little bearing on mankind's needs, and researchers associated with such needless research should realize that the resources spent on their endeavors could be put to better use. Still, we cannot generalize all curiosity-based research as valueless and undesired, as curiosity has relevance to the quality of human life.

People may perceive such research to be useful only in teaching, without any notable direct application. Take, for instance, the many theorems and formulae in mathematics: it is difficult for people to comprehend a direct use for them, and they would therefore presume them fit only for teaching. However, as science and all its domains are interrelated, any data has its own use. It should be noted that many of these mathematical concepts are the basis for the development of science and technology.

When research is directed to an immediate goal, such as a cure for a disease (NRC, 2002) or the development of a gadget, we easily realize its uses and advantages, unlike when it is fully theoretical. The relevance of social class to scientific research is very evident in medical research. Today's research and inventions are market driven, and therefore have high social relevance. Since scientific research is primarily sponsored, corporations sponsor research only when there is profit in it. Thus even unimportant products with high market value see rapid improvements compared to essential products with low market value. For instance, pharmaceutical companies prefer developing products that address baldness, impotency, and beauty enhancement in order to ensure profits. Although several developing countries have many tropical diseases affecting millions of people, such companies are not interested in addressing those markets because the people there are too poor to afford the products.

With time, scientific inventions have become closely associated with social class. As products and services kept improving in efficiency and effectiveness, the costs of accessing them rose as well, and in due course the results of certain scientific research were beyond the reach of certain sections of society in every field. For instance, while many people can benefit from research into car fuel saving, the development of fashionable cars is directed only at a select class. Scientific research has thus deviated from being universally applicable. The fact that affluent sections wanted products and services beyond the reach of the general public saw researchers willing to work exclusively for this segment, and such research and development has been fundamental to the evolution of high-end products catering exclusively to the affluent. Scientific research thus has high social relevance.