Logical and Physical Network Design

Oppenheimer, P. (2004). Top-Down Network Design (2nd ed.). Cisco Press.

This is the second edition in Cisco Press's networking technology series. It is both a comprehensive and practical guide to help network designers build networks that are manageable, reliable and secure. The book is based on a systems analysis methodology to enhance understanding of the network design process. It outlines approaches for assessing the health of existing networks so that the performance of new designs can be measured, and provides solutions for meeting QoS requirements, including IP multicast, the IETF controlled-load and guaranteed services, ATM traffic management, advanced switching and routing algorithms, and queuing. It weighs the merits and demerits of various routing and switching protocols, among them IGRP, IEEE 802.1Q, transparent bridging, OSPF, BGP4 and Inter-Switch Link. It specifically addresses network security, design modularity, IPv4 and IPv6 addressing, wireless networks, redundancy, and new management and design tools. The book is a valuable resource for networking professionals seeking to construct effective networks, develop successful careers and understand new technologies.

Hura, G. S., & Singhal, M. (2001). Data and Computer Communications: Networking and Internetworking. CRC Press.

Given the complexity and sheer number of networking standards and protocols, this book meets the informational needs of a network administrator or designer. It specifically addresses the fundamentals of logical and physical network design and goes further to differentiate between the two design processes. The basics necessary for creating such networks are illustrated at length and with clarity, making the book an excellent instructor for network design students and a resourceful text for practitioners. It is a systematic work that satisfactorily answers numerous questions regarding design, network architecture, deployment and protocol issues, and a good reference for extensive treatment of applied concepts. It also includes in-depth analysis of high-speed integrated digital networks and ATM switching, among other evolving networking technologies. The book is an essential companion for networking and computer professionals and undergraduate students.

Gibson, J. D. (1997). The Communications Handbook. CRC Press.

This second edition of the communications handbook stands as a definitive and detailed reference for the networking field. It covers the fundamental theories of both logical and physical design, discusses the elements of each from a general perspective, and addresses the steps taken to develop a logical network design. It gives a guideline on how to approach the network design process while distinguishing between physical and logical networks, and it treats specialty areas in depth, having been competently compiled and presented. It strikes a reliable balance of technical detail, supporting material, international network communications and vital information. The book has 25 chapters more than the first edition and includes further ideas on special features of the network design process. It is a resourceful book for network engineers seeking to develop cutting-edge logical and physical networks.

Groth, D., & Skandier, T. (2005). Network+ Study Guide (4th ed.). Alameda, CA: Sybex.

The book describes network topology as the arrangement of physically interconnected elements, the links and nodes, and explains how this facilitates transmitting and receiving data from one point to another. A network topology is determined by graphically mapping the configuration of connections between the nodes. Physical topologies include point-to-point, bus, star, mesh and tree arrangements of the network nodes. The book discusses topology as the physical interconnection of all network components (links, nodes and so on), giving a detailed account of a LAN as an example, since a LAN exhibits both a physical and a logical topology; in a nutshell, the physical and logical topologies of a network may or may not be identical. It is an important reference for network administrators, developers and students concentrating on networking.

ATIS Committee PRQC. (2007). Network topology. ATIS Telecom Glossary 2007. Alliance for Telecommunications Industry Solutions. Retrieved February 15, 2010, from http://www.atis.org/glossary/definition.aspx?id=3516
The entry describes a physical network as the actual arrangement of components in existence and a logical network as a virtual arrangement of components. Common network topologies are bus, hybrid, linear, mesh, ring, star and tree. It explains how the two kinds of topology differ in terms of physical connections, transmission rates and types of signals, distinguishing the two designs as, respectively, a layout and the form in which data is transmitted.
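As an illustrative sketch (not drawn from the glossary itself), the common topologies the entry lists can be modeled as adjacency lists over numbered nodes, which also makes the layout-versus-data-path distinction concrete:

```python
# Common physical topologies expressed as adjacency lists over n nodes.
def star(n):
    """Node 0 is the hub; every other node links only to it."""
    return {i: ([j for j in range(1, n)] if i == 0 else [0]) for i in range(n)}

def ring(n):
    """Each node links to its two neighbours around the loop."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def bus(n):
    """Nodes hang off a shared backbone, approximated here as a chain."""
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

# A physical star can still carry a logical ring (as in Token Ring wired
# through a central hub): the wiring and the data path need not be the
# same graph, which is the physical/logical distinction the entry draws.
print(star(4))  # {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```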

Siamwalla, R., Sharma, R., & Keshav, S. (2000). Discovering Internet Topology. Cornell Network Research Group, Department of Computer Science, Cornell University, Ithaca, NY.

Network topology is discussed in this paper as the representation of all interconnections between directly connected points in a network. It is a vital resource for the development of physical network topologies, as it clearly illustrates approaches to handling design problems.

Mueller, S., & Ogletree, T. W. (2003). Upgrading and Repairing Networks (4th ed.). Que Publishing.

Illustrates how the physical aspects of a network normally depend on the physical transport technology, and guides the choice of topology, since there will be a number of LAN topologies to choose from depending on which technology is used. Some legacy products are no longer in use, although other older LAN technologies such as ARCnet remain; where no Internet connection is needed, these older protocols come in handy. TCP/IP is the de facto standard, used from the worldwide Internet down to the LAN.

Oppenheimer, P. (2004). Designing a network topology. Cisco Press. Retrieved February 15, 2010, from http://www.topdownbook.com/chapters/Chapter05.ppt

It addresses the fundamentals of network design and administration, and distinguishes a well-designed network from a badly designed one: in a good design, new additions cause minimal change and troubleshooting is easy because there are no complex protocol interactions to wrap your brain around. It is an important resource for instructors in network design and for network compliance auditing purposes.

Willard, S. (2004). General Topology. Dover Publications.

It illustrates topology as a network's fundamental layout, clearly outlining ways of connecting computers. The book further explains the differences between physical and logical networks and the concepts on which both are based. It outlines considerations for how the different nodes ought to be connected through cables and the signal strength required.
 
Mueller, S., & Ogletree, T. (2003). Network design strategies: Planning and design components. In Upgrading and Repairing Networks (4th ed.). Que Publishing. Retrieved February 14, 2010, from http://www.informit.com/articles/article.aspx?p=101762&seqNum=2

Outlines the requirements for a good network design and specifies the strategy and planning components for network design; it is a reliable source for network designers. As part of the design plan it dictates that the following be considered: documentation, which can take the form of checklists for both complex and simple upgrades, not forgetting training documents for administrators and skilled end users (power users); and an overall project plan, so that the project is implemented in an orderly manner to achieve the goals set.

Steinke, S. (2003). Network Tutorial: A Complete Introduction to Networks. Network Magazine/Focal Press.

Explains logical network design and the IP structure of networks, giving examples such as Class A, B or C addressing schemes while assessing the effectiveness of network topologies. Additionally, it addresses commonly used topology designs such as Ethernet, fiber and ISDN, among others. It is suitable for network administrators and undergraduate students seeking to develop a career in computer networking.
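The classful addressing scheme the book uses as an example can be sketched in a few lines; the first octet of an IPv4 address determines its legacy class:

```python
# Classful IPv4 addressing: the legacy class is read off the first octet.
def ipv4_class(address: str) -> str:
    """Return the classic address class (A-E) for a dotted-quad IPv4 address."""
    first_octet = int(address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "A"   # 8-bit network prefix, e.g. 10.0.0.0
    if 128 <= first_octet <= 191:
        return "B"   # 16-bit network prefix, e.g. 172.16.0.0
    if 192 <= first_octet <= 223:
        return "C"   # 24-bit network prefix, e.g. 192.168.1.0
    if 224 <= first_octet <= 239:
        return "D"   # multicast
    return "E"       # reserved (127.x.x.x loopback also falls outside A-D)

print(ipv4_class("10.1.2.3"))     # A
print(ipv4_class("172.16.0.1"))   # B
print(ipv4_class("192.168.1.1"))  # C
```

Modern networks use classless (CIDR) prefixes instead, but the classful scheme remains the usual teaching example for logical IP design.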

Raza, K., & Turner, M. (2002). Cisco Network Topology and Design. Cisco Press.

Discusses the two fundamental concepts in networking, the logical and the physical network, and goes deep into the basics of network design. It is an essential book for network designers, as it gives vivid examples of network topologies and the design process. Compared with Willard's General Topology, this book is far more resourceful for computer networking students, since its numerous examples give them the opportunity to connect class knowledge with real-life situations.

Hoffer, J. A., George, J. F., & Valacich, J. S. (2008). Modern Systems Analysis and Design (5th ed.). Pearson Prentice Hall.

The book combines physical and logical design at the program level through structure charts and data flow diagrams (DFDs). This is helpful for network designers either starting from scratch or upgrading an organization's existing network. It provides sufficient information regarding network components, i.e. the hardware, protocols and topologies. Network traffic, security requirements and future network expansion are taken into account, and disaster recovery, data recovery and troubleshooting techniques are also considered.

Freer, J. R. (1996). Computer Communications and Networks (2nd ed.). Computer Systems Series. Taylor & Francis.

Introduces the major computer networking concepts from a practical approach, making it most helpful to undergraduates or those in industrial courses relating to network design. It does this by offering the basic knowledge necessary for understanding current communication methodologies, standards and networking techniques. It provides a guideline for decision making in the process of designing network architectures, LANs, WANs, network topologies, implementation and security. While retaining the clarity of the first edition, this second edition is thoroughly revised to address the key concepts of computer networks.
 
Dennis, A., Wixom, B. H., & Roth, R. M. (2006). Systems Analysis & Design (3rd ed.). John Wiley & Sons.

It gives a guideline to concepts in network systems analysis and design, thoroughly covering both the program and system levels of physical and logical design using DFDs as well as structure charts. It is a resource for students of industrial network design courses and for professional network designers, and a step-by-step coach for analyzing an existing network system and successfully designing an appropriate new one.

Annotated Bibliography on Penetration Testing

Arkin, B., Stender, S., & McGraw, G. (2005). Software penetration testing. IEEE Security & Privacy, 3(1), 84-87.

This article looks into penetration testing as a tool for quality assurance and testing of application software. In every business organization, part of quality assurance involves checking the various software applications in use to make sure they continue to meet the organization's needs, often through a series of functional tests that verify proper implementation and working condition. The information provided in this article will be used to present the value of penetration testing in strengthening the security of corporate networks, and to present a strategy for how penetration testing can be used to strengthen corporate network security in business organizations.

Bavisi, S. (2009). Penetration testing. In J. R. Vacca (Ed.), Computer and Information Security Handbook (pp. 369-382). Burlington, MA: Morgan Kaufmann Publishers.

This chapter provides an overview of penetration testing. It defines penetration testing as the method of exploiting potential vulnerabilities within a business organization's network to determine which vulnerabilities are exploitable and "the degree of information exposure or network control that the organization could expect an attacker to achieve after successfully exploiting a vulnerability" (p. 369). This is the working definition of penetration testing used for this paper. The chapter also differentiates penetration testing from computer hacking and provides the various strategies and methods used to conduct penetration testing to strengthen an organization's network security, pertinent information that will likewise be used in the paper to be submitted.

Buzzard, K. (1999). Computer security: What should you spend your money on? Computers & Security, 18(4), 322-334.

The information presented in this article is used to show why penetration testing is important as a means of strengthening corporate computer networks. Business organizations today store, process and send large amounts of digital data through large and complex computer networks. Although these networks may be complex and up to date, the same cannot be said of their security features. Computer misuse has been on a constant rise, primarily because it can be carried out without being detected, allowing the interception and corruption of huge amounts of data in transit between computers. This results in the need for measures such as penetration testing to ensure that sensitive and important data remain secure during transmission and receipt across these complex corporate networks.

Cohen, F. (1997). Managing network security, part 9: Penetration testing. Network Security, 1997(8), 12-15.

In this article, corporate organizations are described as containing vast networks used to conduct daily activities and to facilitate management decision making. While many advances have been made in the technology of the equipment and networks used in such organizations, the same does not hold true for the security protecting the information stored on them. As such, there is a need to address the limitations of security within corporate networks. The information presented in this article will be used to support and provide evidence for the need to apply penetration testing to strengthen corporate network security in business organizations.

Dautlich, M. (2004). Penetration testing: The legal implications. Computer Law & Security Report, 20(1), 41-43.

This article presents information on the limitations of penetration testing as a means of strengthening corporate network security. It discusses the legal implications that may arise from using penetration testing to address vulnerabilities in an organization's network security program, and examines three different statutes that such testing may violate. One of these is the Computer Misuse Act of 1990, which considers the introduction of malware into any computer system to be illegal regardless of the permission provided by the organization. This information will be used to analyze the different methods and strategies of penetration testing and to present recommendations on how it may be done without violating this and the other statutes presented in the article.

Hurley, C., Rogers, R., Thornton, F., Connelly, D., & Baker, B. (2006). WarDriving and Wireless Penetration Testing. Rockland, MA: Syngress Publishing, Inc.

This book provides an overview of the different strategies and methods for conducting penetration testing on wireless computer networks, which more and more business organizations now use. Apart from providing a definition of penetration testing that supplements the working definition from Bavisi (2009), the book presents methods and strategies for conducting penetration testing on computer networks depending on the operating system in use, specifically Windows, Linux and OS X.

Karyda, M., Mitrou, E., & Quirchmayr, G. (2006). A framework for outsourcing IS/IT security services. Information Management & Computer Security, 14(5), 402-415.

This article looks into the technical, organizational and legal issues surrounding penetration testing of corporate computer networks when it is conducted by third-party IS/IT security firms. Many small and medium-sized companies use the services of such firms to secure their networks. As a result, significant issues regarding the security and privacy of a company's data have arisen, particularly when IS/IT security services are procured from other countries and penetration testing is used to determine the overall state of the security and privacy of the company's networks. This article provides information on the limitations of penetration testing as a viable means of strengthening corporate network security, and also supports the possibility, raised by Dautlich (2004), that penetration testing may violate certain statutes and laws.

Lanz, J. (2003). Practical aspects of vulnerability assessment and penetration testing. The
RMA Journal, 85(5), 44-49.

There are a number of reasons why business computer networks have become susceptible to individuals illegally accessing sensitive information and penetrating the network, resulting in the corruption of stored data. Among these are failure to manage the security of the organization's network, improper configuration of the network, and excessive trust and privileges granted by management. Furthermore, a report released by the FBI and the SANS Institute determined that many networks are attacked through vulnerabilities in the operating system underlying the entire network. As a result, penetration testing has come to be considered crucial in strengthening corporate network security: through it, management can conduct independent tests that simulate unauthorized access to the network and then address the weaknesses found in order to prevent real attacks. However, its potential is limited by management's ability to properly supervise testing done by third-party IT professionals, and by the lack of a standardized set of guidelines for determining the effectiveness, or lack thereof, of penetration testing done by external testers.

McFadzean, E., Ezingeard, J. N., & Birchall, D. (2007). Perception of risk and the strategic impact of existing IT on information security strategy at board level. Online Information Review, 31(5), 622-660.

This article looks into senior management's perception of information security and how it impacts the overall security of the organization's computer network. While there is a need to undertake procedures such as penetration testing to ensure network security, there remains a lack of understanding of this aspect on the part of senior management. The information in this article will be used to further support the arguments derived from Lanz (2003) regarding the limitations of the use, and subsequent effectiveness, of penetration testing in strengthening business computer networks.

Midian, P. (2002). Perspectives on penetration testing. Computer Fraud & Security, 2002(6), 15-17.

In this article, the author presents an overview of the vulnerabilities of various computer software programs used in business organizations, explaining why they occur at both the code level and the system level. Among the reasons identified are the ubiquitous buffer overrun and the inability of a program to handle error conditions. The article then shows how penetration testing can find and address such software vulnerabilities. The information in this article thus provides additional reasons why penetration testing is important for strengthening computer network security in business organizations, and will also be used to present methods for applying it to that end.

Moyer, P. R. (1997). Enhanced firewall infrastructure testing methodology.  Network
Security, 1997(4), 9-15.

The onset of globalization in the business sector has further increased the demand for heightened corporate network security, particularly when it comes to Internet access. As a result, penetration testing of corporate networks has come to be considered crucial for many business organizations. Apart from providing supporting evidence for the importance of penetration testing of corporate networks, the article also presents a methodology for penetration testing of the firewall infrastructure used in business organizations, allowing upper management to evaluate the overall risk their networks face from malware and hacking arriving via the Internet.

Pfleeger, C. P., Pfleeger, S. L., & Theofanos, M. F. (1989). A methodology for penetration testing. Computers & Security, 8(7), 613-620.

This article presents a systematic approach to penetration testing as a means of strengthening corporate network security, which will be used as part of the methods presented in the paper to be submitted. The approach includes a thorough analysis of the software system a particular organization currently uses, the formulation of a series of hypotheses about possible flaws in the system and how they might be remedied, and subsequent testing to confirm or reject those hypotheses. Through this systematic approach, network and information technology engineers can identify which parts of the software system must be secured while ensuring that their actions do not violate the statutes and laws presented by Dautlich (2004).

Styles, M., & Tryfonas, T. (2009). Using penetration testing feedback to cultivate an atmosphere of pro-active security amongst end-users. Information Management & Computer Security, 17(1), 44-52.

This article looks into the limitations of penetration testing as a means of strengthening corporate computer networks. Although penetration testing can help by finding and addressing weaknesses in a particular network's security, this proves pointless unless employees are also made more aware and knowledgeable about network security. Employees must understand their responsibility for ensuring that the network's security is not breached in ways that could compromise the entire system. The procedures presented in this article, based on the study conducted, will also be used in the paper to show how this limitation of penetration testing may be addressed and remedied.

Tryfonas, T., Sutherland, I., & Pompogiatzis, I. (2007). Employing penetration testing as an audit methodology for the secure review of VoIP: Tests and examples. Internet Research, 17(1), 61-87.

Voice over Internet Protocol (VoIP) has become one of the tools business organizations use to conduct their activities. Because it is incorporated into corporate networks yet depends on the Internet, VoIP can pose a potential threat to the entire network, one that hackers can exploit, leading to the loss and corruption of sensitive data transmitted and stored by the organization. This makes VoIP one of the systems where penetration testing can be used to strengthen corporate networks, although legal and ethical concerns may limit its use. The model presented in this article provides an action plan whereby penetration testing can be used to test the security of corporate VoIP systems while still meeting legal and ethical parameters.

Yu, W. D., Radhakrishna, R. B., Pingali, S., & Kolluri, V. (2007). Modeling the measurements of QoS requirements in web service systems. Simulation, 83(1), 75-91.

This article presents a strategy for using penetration testing to strengthen corporate computer network security. With globalization now the trend among business organizations, corporate networks are being equipped with web services technology to meet its demands. The article presents a software model for ensuring the quality of service (QoS) of such networks by testing their web service systems. The model provides techniques for the planning, design, implementation, deployment, operation and maintenance of the overall network, against which the process of penetration testing can be evaluated to assess whether it is a viable means of strengthening corporate network security.

A. The DNA, being the most important part of the cell, is understandably found within its deepest recesses. DNA strands are found inside the nucleus, a membrane-enclosed organelle inside the cell itself. The situation can be imagined as a separate closed vault (the nucleus) within a bank (the cell). To reach the DNA would involve getting through the cell membrane, across the cytoplasm and inside the nucleus. Viewing the task as a heist, the intruders would need to breach two layers of security: the cell membrane and the nuclear membrane. There is, however, one secret weapon that will eliminate both security walls: detergent (DNA: Blueprint for Life).

To understand why, we first look at the cell membrane. The cell membrane and the membrane of the nucleus are both made up of phospholipid bilayers. The phospholipid bilayer is responsible for keeping the liquid inside of the cell separate and distinct from its liquid environment. It does this by stacking two layers of phospholipids, molecules with a water-attracting head and a water-repelling tail. In the bilayer, the water-attracting heads face outward on either side, so the inside of the bilayer is composed solely of the water-repelling tails. This gives the bilayer its two key characteristics: an outward-facing surface that can interact with water, and an interior that repels water.

Together, these two properties enable the phospholipid bilayer to keep the cell's insides distinct from its liquid environment. Surfactants such as soap and detergent dissolve these phospholipid bilayers: a soap molecule is built in a similar head-and-tail fashion, and its water-attracting head will interact with the outside face of the membrane's bilayer. As a result, the cell membrane is disrupted and the cell's insides unravel. The same occurs to the nuclear membrane, which is composed of the same phospholipid bilayer. Therefore, with the simple addition of soap, the DNA is exposed, open to the world for the taking.

B. The same method as in A will not work for plant cells, because plant cells have a much tougher outer layer of security: the cell wall. Unlike the cell membrane, the cell wall is mostly composed of cellulose and other carbohydrates; it is physically tough and is responsible for giving plants their rigidity. Cellulose is the molecule that makes trees hard, and very few chemicals in the world can break it down, making it a very tough wall to penetrate. Our thief may use such a chemical to break into the plant cell by dissolving the cellulose, for example an enzyme like those in the gut of termites, creatures which readily eat and digest cellulose-rich wood. After breaking through the cell wall, our thief may again use soap or another surfactant to dissolve the cell membrane and the membrane enclosing the nucleus of the plant cell, releasing the plant DNA.

Wind Power

Wind power is a kind of renewable energy generated from the kinetic energy of moving air. After this energy is harnessed, it is transformed into other forms, such as mechanical energy and electrical power. The use of wind power has evolved over a long time: people have long known of this type of power and used it to sail boats. The first use of wind power to drive windmills, for grinding grain and pumping water, was developed by the Persians around 500-900 AD. The application of wind energy technology to water pumping in the United States opened up immense areas for ranching and farming. Present technology has borrowed this principle and improved on it to make more efficient wind power generators. Today, people use wind power to generate clean, renewable energy for domestic and industrial use.

A basic wind power generator is composed of a rotor, a generating system and a supporting tower. The rotor is made up of blades, which vary in size depending on the size of the generator. Laminated materials such as balsa wood, fiberglass, composites and carbon fiber are commonly used for blade construction, with the aim of attaining high strength-to-weight ratios. The blades are molded into airfoils that can be lifted by the slightest possible wind, and they are fitted with lightning arrestors to protect them from lightning strikes. The blades are joined at the axis to a hub, around which they rotate freely. The longest blade available has a length of 61.5 meters.

The generator system is the engine that produces electricity. The rotor is joined to the system by a shaft that transmits its revolutions through a gearbox to the turbine, which is synchronized to handle different wind speeds. Lubrication protects the system from wear due to friction. The system is also mounted on a mechanism that turns it to face the direction of the wind, and it is equipped with a braking system that can stop the blades from turning so that maintenance can take place. Present-day systems have computerized controllers that condition and control the power output and also serve as remote-control and monitoring devices. The tower, in turn, hoists the blades and the generator system high enough to make maximum use of the wind. Present-day towers can reach up to 70 meters in height.

Wind power works by utilizing the kinetic energy of moving air. Heat from the sun's rays warms the surface of the earth at different intensities, making some patches warmer than others. As air gets warm, it becomes lighter and rises, creating patches of low pressure; air from surrounding patches of high pressure then moves in to equalize the pressure. The result is a movement of air in the form of wind. This wind is harnessed by strategically positioning a tall tower with flexible propellers facing the wind. When the wind blows past these propellers, they rotate and energy is generated. The energy generated depends on the number and size of the propellers. Coastal regions are known to be best suited for generating this kind of energy; other places where wind is strong and reliable, such as open plains, hilltops, and gaps between mountains, also provide good locations. One of the basic requirements is wind blowing at an average speed of over 25 km/h. Different components assist in transforming this energy into other forms: wind turbines transform it into electrical energy, windmills harness mechanical energy, wind pumps raise water to higher ground, and wind sails propel light ships.
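The dependence of generated energy on propeller size and wind speed described above follows the standard wind-power relation P = 1/2 ρAv³ Cp, where A is the area swept by the blades. The sketch below illustrates this relation; the density value and the power coefficient of 0.4 are typical assumed figures, not specifications from any particular turbine.

```python
import math

def wind_power_watts(rotor_diameter_m, wind_speed_ms,
                     air_density=1.225, power_coefficient=0.4):
    """Estimate the power extracted by a wind turbine.

    P = 0.5 * rho * A * v**3 * Cp, where A is the swept rotor area.
    Cp is capped in practice by the Betz limit (about 0.593);
    0.4 is an assumed, typical value for illustration only.
    """
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * power_coefficient

# Because of the v**3 term, doubling the wind speed yields
# eight times the power from the same rotor:
p_slow = wind_power_watts(10, 5)
p_fast = wind_power_watts(10, 10)
print(round(p_fast / p_slow))  # -> 8
```

This cubic dependence on wind speed is why the minimum-wind-speed requirement mentioned above matters so much: a site with modestly stronger average wind delivers disproportionately more energy.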

One of the greatest strengths of wind power is that it occurs naturally, and for this reason it is renewable. It has the basic advantage of being cheap and readily available: wind is a free gift of nature, and the only high cost incurred is the capital investment required for installation. Compared with other sources of energy such as coal, wind energy produces no waste products and has no greenhouse effect. Moreover, the installation equipment does not limit the use of the land for other purposes such as agriculture. Some wind power generation sites have even become tourist attractions, generating revenue for local authorities. Wind power can also serve as a good decentralized method of generating energy at the point of need, such as in remote locations.

However, wind power generation is not without limitations. The unpredictability of the weather can sometimes lead to a lack of wind. Some suitable locations can turn out to be expensive ventures due to the high cost of land. Noise pollution from the swooshing sound of the turbines has been a concern when installations are located near residential areas. Environmentalists have also raised concerns about the risks posed to bird populations.

Due to the continued shift toward long-term sources of renewable energy, wind power is gaining favor as a reliable and cheap source of power for electrical and mechanical uses. This has also been aided by improvements in technology, which have produced better generation techniques. According to the Battelle Pacific Northwest Laboratory, a federal research laboratory of the United States, wind energy can supply over 20% of the nation's total energy demand. If well harnessed, wind energy has the capacity to meet all of the world's energy demand. Increased availability and use of wind power will reduce dependence on fossil fuels and their price unpredictability.

In the last few years, the U.S. has increased its wind power installations at an average rate of 32%. Strong policy support, a favorable economy, and increased demand for energy have helped to achieve this growth. The growth of this industry, driven by the demand for clean and affordable energy, has not been limited to the U.S. In the U.K., plans are underway to develop a broad, long-term wind power framework for domestic and commercial energy consumers, a venture intended to earn the country long-term benefits.

One of the major drawbacks in the development and adoption of this technology has been the huge size of the wind generating equipment and the transportation and experienced personnel required for its installation. This is coupled with increased global demand for steel for other industrial uses. In the U.S., for example, components used in the assembly of wind turbines are manufactured outside the country. This adds to the cost of installation, which can be a great discouragement to economies with a shortage of capital.

Gloves in the funeral home environment

Gloves should be used for the appropriate purpose, as different gloves are designed to protect against different chemicals. Funeral personnel encounter many chemicals as well as germs that may affect them. They require gloves that will protect them against bases, alcohols, dilute water solutions, ketones, and aldehydes.

The best gloves for funeral service personnel are latex gloves. They are made from natural rubber and have good physical properties and dexterity. Healthcare workers mostly use latex gloves, and they have proved effective in preventing contamination with infectious diseases. They meet the specifications set by OSHA. The cost factor should also be considered when deciding which type of gloves to buy.

Some gloves are cheap, others are medium priced, and still others are expensive. Latex gloves are cheap, which makes them accessible to people of different financial means. Since people are not always prepared for funeral services, they require materials and services that are cost effective. Death is often unexpected, and most families do not have the financial capacity to cover all the expenses required.

However, some people are allergic to these gloves. People who constantly use latex gloves report allergic reactions such as skin rashes, itching, nasal symptoms, asthma, hives, flushing, and many others. This creates a barrier to their use by some people, and the gloves cannot be worn for long periods. Others cannot use latex gloves at all due to the health complications associated with them. The government has established rules to control the manufacture of these gloves so as to reduce the allergic conditions related to their use.

Types of gloves
Latex gloves are made from natural rubber. They are cheap and have good physical properties and dexterity; however, they perform poorly against oils, greases, and organics, and imported latex gloves may be of poor quality. They are used against alcohols and dilute water solutions. Natural rubber blend gloves are low in cost and have better chemical resistance than latex gloves, but inferior physical properties compared to natural rubber. Vinyl gloves are low in cost with good physical properties and medium chemical resistance, though imported vinyl gloves may be of poor quality; they are used against strong acids and bases, salts, alcohols, and water solutions. Nitrile gloves are low in cost, have good physical properties, and offer a long service life; however, they are poor against benzene, methylene chloride, and many ketones, and are best used against oils, greases, aliphatic chemicals, xylene, and trichloroethane. Norfoil gloves have excellent chemical resistance but fit poorly, are easily punctured, have poor grip, and are stiff; they are best used for hazmat work. Viton gloves resist organic solvents but are extremely expensive and have poor physical properties. Butyl gloves are specialty gloves that resist polar organics; however, they are expensive and poor against hydrocarbons and chlorinated solvents. They are used against glycol ethers, ketones, and esters.

Conclusion
The funeral personnel require gloves that will protect them from harmful chemicals that they may touch while on duty. The appropriate gloves should be used for the appropriate work so as to avoid contamination. The cost factor should be considered while choosing the best gloves for particular occasions. Some people are allergic to some types of gloves. These individuals should select the gloves that do not cause reactions to their bodies.

The Formation of Precipitation

On a given day in the winter in St. Louis, Missouri it began snowing in the morning, the snow changed to rain by noon, later in the afternoon sleet was falling which turned to snow, and by evening rain was freezing on bridges and roadways. Briefly explain how such a sequence of precipitation might occur.
Changes in the forms of precipitation depend on different factors, including climate or weather, temperature, air saturation, and geographical location, among others. The two mechanisms involved in this process are the Bergeron process (precipitation from cold clouds in the middle latitudes) and collision-coalescence (the warm-cloud process mostly associated with the tropics).

The two general types of precipitation are liquid and frozen. Liquid precipitation, for example, can take the form of rain or drizzle, while frozen precipitation includes snow, sleet, ice pellets, and hailstones. Precipitation varies in droplet size or diameter, in hardness upon impact, and in color, ranging from opaque white to transparent and translucent.

In the situation given above, the first form of precipitation that occurred was snow. Snow forms during winter, when cold or near-freezing temperatures cause precipitation to solidify into crystals; snow fell in the morning because the temperature was cold enough at that hour. By noon, the precipitation had changed to rain, a liquid form, because the midday warmth melted the snow into rain. By the afternoon, as temperatures gradually cooled again, the precipitation had changed to sleet, which is partially melted snow or a mixture of rain and snow, possibly including rain that gradually froze and small amounts of snow. By evening, with the sun down, the cold air chilled bridge and road surfaces below freezing, so the falling rain froze on them.

Earth Science

Albedo is generally defined as the ratio of diffusely reflected to incident electromagnetic radiation; it is another term for reflectivity. It is a unitless measure of an object's or surface's reflectivity. The surface of the Earth has an average albedo of 0.3. All else being equal, a place with a higher albedo would tend to be cooler than a place with a lower albedo.

Rural areas, where the surface is covered by trees and other vegetation, generally have a higher albedo than urban areas. Consequently, the temperature of rural areas is expected to be lower than that of urban areas, all else being equal. The phrase "all else being equal" is very important in comparing these two surfaces, or any other surfaces, because albedo is not the only parameter that contributes to the temperature of a surface. The emittance of the surface is another factor: an object will radiate more energy than another object of the same surface area if its emittance is larger. Losing more energy lowers its temperature faster, a phenomenon that reduces warming.
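The cooling effect of albedo can be made concrete with the standard radiative-equilibrium calculation, which balances absorbed sunlight against blackbody emission. The sketch below is a zero-atmosphere simplification (it ignores the greenhouse effect and the other factors discussed here, such as emittance differences and latent heat flux), so it shows only the albedo term in isolation.

```python
STEFAN_BOLTZMANN = 5.670e-8  # W m^-2 K^-4
SOLAR_CONSTANT = 1361.0      # W m^-2, mean solar irradiance at Earth

def effective_temperature_k(albedo):
    """Planetary effective (radiative equilibrium) temperature.

    Absorbed flux (1 - albedo) * S / 4 is balanced against
    blackbody emission sigma * T**4.  A bare-rock sketch: the
    greenhouse effect is ignored, which is why Earth's result
    (~255 K) falls below the observed surface mean of ~288 K.
    """
    absorbed = (1 - albedo) * SOLAR_CONSTANT / 4
    return (absorbed / STEFAN_BOLTZMANN) ** 0.25

# Higher albedo -> cooler equilibrium temperature, all else equal:
print(round(effective_temperature_k(0.3)))  # Earth's average albedo -> 255
print(round(effective_temperature_k(0.1)))  # darker surface -> 271
```

The 16 K gap between the two cases illustrates the essay's point: reflectivity alone produces a substantial temperature difference before any of the other surface properties are considered.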

Another factor is the amount of heat from the energy budget that goes to latent heat flux, the heat used to evaporate water. Heat used in this flux does not contribute to heating the air, which means that if the latent heat flux increases, the energy spent on heating is lessened. This is another reason that rural areas are cooler than urban areas. Rural areas covered by trees or other vegetation are kept cool by the evapotranspiration of the water these plants absorb; the plants act as an evaporative cooler. Considering that 98% or more of the water absorbed by plants is evapotranspired, the effect is an appreciable decrease in temperature.

1983 Richard E. Clark

The beginning of the 21st century has been marked by a growing commitment to the use of technology in everyday life, and the role of technology in curriculum and instructional design has become increasingly important. Throughout the history of instructional design, the impact of technology on student achievement has been the source of continuous professional debate. In 1983, Clark wrote that "the best current evidence is that media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition. Basically, the choice of vehicle might influence the cost or extent of distributing instruction, but only the content of the vehicle can influence achievement" (p. 445). That means that, according to Clark (1983), what matters most is the curriculum content, not the technology that is expected to drive it.

Looking back to the time when Clark (1983) wrote, it is more than clear that the state of technology then did not give curriculum designers any chance to fully appreciate its instructional potential. However, from the viewpoint of the 21st century, technologies are no longer mere vehicles but direct sources of significant influence on student achievement. As a professional, I can no longer imagine effective instruction without technologies; I view them as the most promising element of any successful curriculum. My expectations of and beliefs about technology are supported by a whole range of studies; for example, Harwood and McMahon (1997) confirm a direct correlation between the use of video media in a high school chemistry course and student achievement. That, however, does not mean that the content of the curriculum itself is no longer relevant; rather, both the content and the vehicle can fairly be regarded as two contributing factors to student achievement.

From my experience, technology, like any other instructional vehicle, requires that students be prepared to use it in learning. I am confident that the effects technology produces on student achievement will largely depend on how well the technology is integrated into the basic curriculum and what technology skills students possess. However, it is at the very least incorrect to limit the role of technology to that of a curriculum vehicle: the 21st century creates almost unlimited opportunities for using technology as the basic driver of positive student advancement in all disciplines.

Diffusion of Innovation

Whether Rogers' Diffusion of Innovations theory has become a revolution in reconsidering the role of innovations in social life is difficult to decide, but it is clear that the theory creates a general picture of how innovations work, and can work, for the benefit of social development. According to Rogers (1983), diffusion is "the process by which an innovation is communicated through certain channels over time among the members of a social system" (p. 5). Because innovations are neither authoritative nor collective, every individual is bound to pass through a unique innovation-decision process that comprises several essential stages: knowledge, persuasion, decision, implementation, and confirmation.

What makes sense is that the members of a social system are inherently interdependent, and thus the quality of their innovation-decisions depends on the quality and direction of the innovation-decisions made by other members of the social system. That means that some members will take more risk in their decision to adopt innovations, and the rest of society will seek to follow their example as soon as the beneficial character of these innovations is confirmed. This is exactly how innovation-decisions work in education and public schooling: while some schools and instructional designers set the stage for using innovations, others readily adopt the same innovative approaches as soon as they can see their positive effects on education. However, in the context of education, the innovation-decisions of other social members alone cannot promote successful implementation of technologies. According to Sahin (2006), diffusion of innovations is driven by social, organizational, and personal variables: social variables comprise friends', peers', and faculty members' decisions about innovations; organizational variables include physical resource support and university mandates; and personal factors imply personal interest in instructional technology, in using innovations to improve teaching, in enhancing instructional technology, etc. All these factors contribute equally to the development of innovation-decisions in education and can successfully expand the pool of those who are willing to become the primary instructional innovators.

American Welding Society Puget Sound Scholarship

The University of Puget Sound is a university that I've looked forward to attending for a long time. With its interdisciplinary programs and the unique aspect of a liberal-arts-focused degree in welding, it has easily become the university of my choice. However, financial constraints are becoming a crucial barrier to my aspiration to earn an undergraduate degree from this university.

Welding has been my long-term passion since I was first acquainted with the subject of engineering. It has been my desire to seek admission to a prestigious university where I will be able to study welding rigorously. Applying for the American Welding Society Scholarship is my attempt to attain an education that would otherwise not be affordable for me. Additionally, I feel that I have the credentials to deserve one: my attached high school transcripts bear testimony to my commitment to results and to achieving my goals.

I have extensive extracurricular activities that reflect my interest in the liberal arts. I find the concept of entwining the liberal arts with technical expertise in welding a perfect combination, as it would give me the opportunity to pursue my passion for the arts while having a stable career in welding. I am an ardent believer that if one does what he or she loves, there is no way that he or she will not succeed, and it is with strong conviction in this belief that I am opting for this program to build a brighter future for myself.

I plan to market myself in the welding industry after completing my undergraduate degree, and then, after gaining sufficient experience, to work in the industry and put my skills into practice. Having struggled in the past few years to make financial ends meet, I have developed an entrepreneurial streak. But I am also resolute that I should acquire the relevant knowledge, skills, and experience before starting something myself, so that I am sure I will be prepared for anything that comes my way.

With its rich history dating back to the First World War, the American Welding Society has taken a number of initiatives to assist young people in attaining brighter futures. It is therefore my belief that, in all fairness, I am an ideal candidate for this scholarship in terms of both need and merit.

With every passing year, competition for employment is increasing, and the number of people seeking work is growing faster than employment opportunities. I therefore wish to safeguard my future by winning a scholarship and pursuing my professional education in a field I feel passionate about.

Thus, by pursuing an undergraduate degree at the University of Puget Sound, I hope to attain a good-quality education, exposure to the many fields available, and the knowledge and ability to excel in the profession I choose. I hope that in the years to come I can take advantage of every opportunity and do justice to my chosen field. But for all this to materialize and become a reality, I am highly hopeful of attaining the American Welding Society Scholarship.

STATEMENT OF RESEARCH INTEREST

My name is Anwar Ahmed Anees Hammad. At present I am serving as an Industrial Engineer.

I earned my B.Sc. in Industrial Engineering with a GPA of 3.92 from King Abdul Aziz University, Jeddah, KSA, in 2006. I recently completed my Master's in Engineering at McMaster University, Hamilton, Canada, with a specialization in Entrepreneurship and Innovation.

I have excellent proficiency in a number of languages, and my computing skills are at an advanced level.

I possess skills such as creativity, working under pressure, working in teams, decision-making, and the ability to learn new skills.

I have completed a number of training programs, including introduction to industries, industrial material handling, language skills, strategic planning and implementation, management for non-managers, and writing skills.

I have served as a supply planning manager and a production line supervisor in industrial enterprises.

I am interested in gaining admission to Environmental Applied Science and Management. My academic background, training, potential, and work experience will all help me perform well in my studies in this field of interest, which is why I am seeking admission to this program.

Furthermore, my academic background and work experience can be helpful in understanding how industries pollute our environment and what remedial measures industries will require to minimize the menace of industrial pollution.

I am interested in carrying out research projects at the master's and PhD levels in waste management and water treatment, and in how entrepreneurship influences decision-making in these areas. The research field I am interested in is related to my academic studies and work experience.

My academic background and work experience, as outlined above, are relevant to the field of study I intend to pursue.

My career objective is to secure a responsible position in an environmental protection agency after completing my PhD in Environmental Applied Science and Management. I would like to work to mitigate industrial pollutants such as industrial waste and contaminated water. My industrial engineering degree, together with the degree I am seeking, can go a long way toward helping me do my job well. In this way I will be able to contribute to creating a healthy environment for humanity.

My knowledge, abilities, potential, and skills would be utilized for the betterment of humanity, for we see that industries all over the world have been making life miserable for people by dumping their refuse into the air and water without treating it properly. It is the need of the hour to do what one can to make our air and water free of pollutants. Moreover, work in this field will also help protect our biodiversity and aquatic life.
Human genetics refers to the study of inheritance as it occurs in human beings. It encompasses various overlapping fields of study, such as cytogenetics, classical genetics, molecular genetics, genomics, developmental genetics, biochemical genetics, genetic counseling, population genetics, and clinical genetics. Most inherited human traits are influenced by genes. Questions relating to human nature, knowledge of human diseases, the development of effective treatments for such diseases, and an understanding of the genetic component of human life can all be addressed through the study of human genetics. Today, when we think of human genetics, what first comes to mind are the physical characteristics an individual inherits from the parents. However, there is more to human genetics: genes have also been found to be responsible for various diseases, and human genetics encompasses more than just the inheritance of genes (Lewis, p. 56).

Human genetics
Human genetics is concerned with the genes of individuals. When the term human genetics is mentioned, we all think of the inherited gene content that determines an individual's traits. To understand what genes are, one has to know their location and composition. Genes are located in the chromosomes.

Chromosomes contain the ingredients that make up a living thing. They are present in almost all cells of a living thing, located in the nucleus, and are made up of strands of deoxyribonucleic acid, commonly known as DNA. Some segments of these DNA strands are known as genes. Each gene encodes a different protein, and, as is commonly known, proteins are essential in building, maintaining, and regulating the body: they are vital in bone formation, controlling digestion, enabling the movement of muscles, and keeping the heart beating. Most body cells have 46 chromosomes, while egg and sperm cells contain 23 chromosomes each. Following the union of an egg and a sperm, the resulting fetus inherits equal shares of DNA from its parents, the mother and the father. Of the 46 chromosomes inherited by the fetus, only two determine a person's sex: a boy inherits a Y chromosome from the father and an X chromosome from the mother, while a girl inherits one X chromosome from the mother and another X chromosome from the father.

The history of human genetics dates back to the 19th century. Gregor Mendel, a Czech monk, was the first to argue that human traits are passed on, or inherited, across generations. Mendel studied the inheritance of traits such as smoothness and color in pea plants and realized that these traits were passed on from the parent following a particular pattern. However, Mendel's ideas were only expounded by other scientists in the 20th century. Genes exist in different forms known as alleles. A gene in charge of determining hair color, for example, may have various alleles: auburn hair, black hair, blond hair, or red hair. Of the two genes one inherits from one's parents, one may be much stronger than the other; these are known respectively as the dominant and recessive genes. The dominant gene determines the physical traits or characteristics of an individual and is outwardly expressed in a living organism. For a recessive trait to be exhibited, two recessive alleles must combine; in this case, the outcome may differ from the parents, the origin of the genes. The combination of two recessive genes can lead to genetic disorders manifested in various forms today, such as albinism.
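The dominant/recessive pattern described above can be sketched as a simple Punnett square. The allele names below ('B' for a dominant allele, 'b' for a recessive one) are purely illustrative placeholders, not a real model of any human gene.

```python
from itertools import product

def punnett_square(parent1, parent2):
    """All offspring genotype combinations for a single gene.

    Each parent genotype is a 2-character string of alleles:
    uppercase = dominant, lowercase = recessive.  Illustrative
    only; real traits often involve many genes.
    """
    offspring = []
    for allele1, allele2 in product(parent1, parent2):
        # Write each genotype dominant-allele-first by convention.
        offspring.append("".join(sorted(allele1 + allele2)))
    return offspring

def shows_recessive(genotype):
    """A recessive trait is expressed only with two recessive alleles."""
    return genotype == genotype.lower()

# Two carrier (Bb) parents: on average 1 in 4 offspring combinations
# pairs two recessive alleles and so exhibits the recessive trait.
kids = punnett_square("Bb", "Bb")
print(kids)                                   # -> ['BB', 'Bb', 'Bb', 'bb']
print(sum(shows_recessive(k) for k in kids))  # -> 1
```

The 'bb' outcome in the last position is the case the essay describes: only when both inherited alleles are recessive does the recessive trait, or the associated disorder, appear.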

Many of the genetically related disorders noted today result from changes in genes or alterations of the genetic code. In some instances, genes may be deleted; in other cases, they may be located in the wrong areas of a chromosome or swapped between chromosomes. Due to these alterations, genes fail to work, or end up working in the wrong parts of the body, leading to genetic disorders. Mutation is another way in which the genetic code can be altered. Gene mutations can have various implications: they can prevent some parts of a protein from being made, lead to the substitution of amino acids, delete parts of the message and thus shorten the gene, or make messages or proteins begin at the wrong place. All these mutations manifest themselves as disorders, some mild and others life threatening.

Future of human genetics
So far, scientists and specialists in human genetics have been able to identify and relate most birth defects and disorders to genetic variation. They have also identified the various mutations that cause disorders, which has helped them develop preventive measures for individuals believed to be at high risk of passing a disorder on to their offspring. Treatments for some genetic disorders have been developed. With such tremendous breakthroughs in human genetics, I believe that this field of medicine will expand further in the future. My vision for human genetics is to see scientists in this field identify all genetic disorders and come up not only with cures for them but also with preventive medicines for any individual feared to be at risk. In the future, human genetic scientists should also be in a position to develop immunizations to protect all individuals from genetic diseases. When this happens, the field of medicine will have discovered cures for almost all diseases, including cancer, diabetes, and other terminal illnesses.

ICT regulation

There are various pieces of legislation in the United Kingdom that directly affect the use of information communication technology in government agencies as well as the private sector. These laws were enacted by Parliament to protect the public and organizations from the harmful effects of information communication technology. They include the Data Protection Act of 1998, the Computer Misuse Act of 1990, and the Health and Safety at Work Act of 1974. Other acts also govern the use of ICT and are very relevant in the public sector and government agencies, but these three are the most important.

The Data Protection Act of 1998 was enacted by the United Kingdom Parliament and defines the criteria under which information about individuals may be treated and used legally. The main function of the act is to protect citizens from abuse through the illegal use of information about them, and it provides individuals with mechanisms to safeguard that information. The act defines the basic principles to which any government agency that uses personal information must always adhere. For example, it requires that information not be used for any purpose other than the one for which it was collected, and not be kept for an unnecessarily long time. The individual must also be allowed to access the information, and it should only be available to authorized persons. This law is applicable across the European Union, where different government agencies can exchange individuals' personal information within the bloc; however, personal information held by government agencies can only be sent outside the EU under special conditions specified by the act.

Another act that affects the use of ICT in government agencies is the Computer Misuse Act, which was enacted in 1990. This act makes certain activities on computers illegal, such as hacking personal systems, misusing software, and gaining unauthorized access to personal files. The Computer Misuse Act specifies which activities should be considered illegal and punishable by law. It is therefore illegal for a government agency to access personal information on a computer or modify personal material without the authority of the owner. Individuals are likewise forbidden by this act from intentionally accessing government agencies' information, modifying it, or gaining unauthorized access to secured information.

The Health and Safety at Work Act of 1974 is also very applicable to the use of ICT in government agencies and institutions. The act was originally designed to ensure that employees as well as employers are safe at the workplace. It requires government agencies to provide safe ICT equipment and systems to their workers, to provide user instructions and training for new equipment, and to supply protective equipment where necessary. Employees, in turn, should take responsibility for the safety of other workers and of themselves, ensuring safe use of dangerous equipment by avoiding misuse and reporting faults in time. The Health and Safety at Work Act of 1974 as applied in the United Kingdom today is based on European Union legislation.

Apart from these regulations, the European Union has a set of directives that member states follow. The EU requires a record of risk assessment covering all potential risks associated with the activities of any institution. EU legislation pays particular attention to the safety of the people involved in any government or private agency dealing with ICT.

Hydrogen Cars

Over the past decades, humanity's destruction of the planet has become an alarming problem. The buildings people live in, the food they consume, and the insubstantial luxuries they exploit are having a detrimental effect on the very planet they heavily depend on. Because of these disturbing facts, governments all over the world have been formulating innovative strategies to minimize the ongoing destruction of the environment. One of the most promising approaches being encouraged and continually developed by many countries today is the hydrogen car. Many nations view hydrogen as an excellent alternative energy medium for vehicles because it is the most abundant of all chemical elements in the universe and does not emit harmful chemicals into the environment. Without a doubt, when hydrogen-powered cars become the accepted mode of transportation, people will reduce their reliance on fossil fuels, enjoy lower prices at the fuel pumps, and cut back on the greenhouse gases that cause climate change.

Hydrogen Car Industry
In recent years, major automakers have been working together to develop ever more practical hydrogen-fueled cars. In fact, many of these companies have already released their own versions of hydrogen-powered cars. Honda, for instance, has already showcased a few lines of hydrogen cars and has announced plans to expand its line of passenger cars in the near future. Likewise, Mercedes is expected to start small-scale production of hydrogen cars this year. More significantly, a number of gasoline stations are getting on board, planning to supply hydrogen fuel alongside gasoline at their pumps so that owning and driving these cars becomes more convenient for the general public. However, many experts still believe that additional research must be done before hydrogen cars become a common sight on the world's highways.

Although hydrogen cars are now available in the marketplace, many automakers still believe that mass production of hydrogen vehicles will not take place during this new decade. Hydrogen cars are currently very expensive, but these carmakers believe that by the start of 2020 their prices will drop dramatically. The California Fuel Cell Partnership predicts that between 2012 and 2020, mass production of fuel cell and internal combustion hydrogen cars will significantly take off (Llanos, 2004). They further believe that as production volume increases, production costs will fall. Accordingly, if 100,000 hydrogen cars are produced, their prices are expected to drop to as low as $20,000 to $25,000 per car (Llanos, 2004).

Hydrogen Fuel Routes
There are two routes by which hydrogen can power a vehicle: indirectly, through fuel cells, or directly, in a converted internal combustion engine. Small car companies like the Hydrogen Car Company and Robinsons Company focus more on producing hydrogen internal combustion engines, given that most of their cars are primarily designed to run on biodiesel and only secondarily on hydrogen fuel as a backup source. During combustion, the hydrogen combines with oxygen, generating the energy that powers the vehicle. In 2000, BMW demonstrated this technology publicly by carrying people in a fleet of 15 sedans powered by hydrogen internal combustion engines through a neighborhood in Germany (Australian Academy of Science, 2001).

Fuel cells, on the other hand, are like batteries that trigger an electrochemical reaction between oxygen and hydrogen, which produces electricity. However, unlike regular batteries, which store electricity, hydrogen fuel cells produce electricity as the car moves. Major carmakers like Anuvu and Honda are more inclined to use fuel cells because they are cleaner than the internal combustion route, which burns fuel in the engine. Moreover, a fuel cell engine uses hydrogen more efficiently than an internal combustion engine: the same amount of hydrogen can power a fuel cell vehicle at least twice as long as a modified internal combustion engine (Australian Academy of Science, 2001).

Costs of Having Hydrogen Cars
At present, hydrogen fuel sold by industrial gas suppliers costs roughly $10 per kilogram. However, a kilogram of hydrogen delivers two to three times the mileage of a gallon of gasoline. By 2015, the Department of Energy aims to reduce the cost of hydrogen fuel to around $2 to $3 per kilogram (Love to Know, 2009). Engine conversion is likewise somewhat expensive at current prices. Most conversions for Hummers start at $60,000, pickup trucks at $99,995, and Shelby Cobras run about $149,000 (Llanos, 2004). For Ford trucks, vans and other luxury SUVs, hydrogen internal combustion engine conversion normally ranges between $30,000 and $80,000, while fuel cell engine conversion for pickups and vans can cost between $99,995 and $149,995 (Llanos, 2004). Considering the environmental and health benefits obtained in using hydrogen cars, the aforesaid costs are unquestionably a very small price to pay.
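As a rough illustration of how these fuel prices translate into running costs, the per-mile cost can be compared under some stated assumptions. The gasoline price and fuel economy below are illustrative placeholders, not figures taken from the sources cited; only the hydrogen prices and the two-to-three-times mileage claim come from the text above.

```python
# Rough per-mile fuel cost comparison based on the figures cited above.
# ASSUMPTIONS (not from the sources): gasoline price and fuel economy.
GASOLINE_PRICE_PER_GALLON = 3.00   # assumed, for illustration only
GASOLINE_MPG = 25                  # assumed, for illustration only
HYDROGEN_PRICE_PER_KG = 10.00      # current industrial price cited above
MILEAGE_MULTIPLIER = 2.5           # hydrogen: 2-3x the mileage of a gallon of gasoline

gas_cost_per_mile = GASOLINE_PRICE_PER_GALLON / GASOLINE_MPG
h2_miles_per_kg = GASOLINE_MPG * MILEAGE_MULTIPLIER
h2_cost_per_mile = HYDROGEN_PRICE_PER_KG / h2_miles_per_kg

print(f"Gasoline: ${gas_cost_per_mile:.3f} per mile")
print(f"Hydrogen at $10/kg: ${h2_cost_per_mile:.3f} per mile")

# At the Department of Energy's 2015 target of $2-$3 per kilogram:
for target_price in (2.00, 3.00):
    print(f"Hydrogen at ${target_price:.0f}/kg: "
          f"${target_price / h2_miles_per_kg:.3f} per mile")
```

On these assumptions, hydrogen at today's $10 per kilogram already lands in roughly the same per-mile range as gasoline, and at the $2 to $3 target it would be several times cheaper.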

Hydrogen Cars vs. Other Hybrid Cars
Unlike many of the green and hybrid cars available on the market today, hydrogen cars are the only type that offers the promise of zero-emission technology (Hydrogen Cars Now, 2009). Unlike fossil-fuel burning cars, which emit pollutants such as nitrous oxide, carbon monoxide, carbon dioxide, ozone and microscopic particulate matter, the only byproduct of a hydrogen car is water vapor (Hydrogen Cars Now, 2009). Although other green and hybrid cars have addressed the concern of greenhouse gas emissions, only hydrogen cars guarantee zero emission of noxious wastes. According to the Environmental Protection Agency (EPA), conversion from fossil-fuel-powered cars to hydrogen-powered cars would prevent the release of more than a billion tons of greenhouse gases into the environment every year (as cited in Hydrogen Cars Now, 2009).

United States
A number of hydrogen cars are already running on the road today. In the European Union, Japan, and California, several hydrogen cars are being used as fleet vehicles. In the United States, an increasing number of car companies are now selling hydrogen vehicles. Industrial gas dealers are likewise selling hydrogen in cylinders, at prices ranging from $1 to $20 per kilogram, to meet the fuel needs of these environmentally friendly vehicles (Llanos, 2004). At the moment, however, only a few hydrogen filling stations operate across the globe. In fact, California, which has the most hydrogen stations in the United States, currently has only 13, although it plans to establish 170 more by the end of 2010 (Llanos, 2004). Moreover, the government is still carefully sorting out the best ways to store, distribute, and produce hydrogen.

Future of Hydrogen Cars
While other types of alternative fuels can be stored, trucked, and piped through the existing system for gasoline, the nature of hydrogen will necessitate a whole new fuel distribution infrastructure (California Fuel Cell Partnership, 2009). As a result, a consumer distribution system is not yet in place. Moreover, the durability issues and high cost of hydrogen cars, their limited capacity to store large amounts of hydrogen fuel, and the lack of a carbon-free method of generating hydrogen make the widespread availability of hydrogen cars even more unattainable. Accordingly, hydrogen cars are not expected to make a significant impact on petroleum use, carbon emissions, or greenhouse gas emissions within the next decade, because these indispensable requirements for operating hydrogen cars across the board are still absent. Nonetheless, despite these current inadequacies, many experts remain hopeful that in the near future hydrogen-operated vehicles will become a full-fledged transportation system all over the world (California Fuel Cell Partnership, 2009).

Conclusion
Technologically and ecologically, hydrogen is the most sensible fuel for automobiles right now. However, hydrogen does not occur freely in nature; it must be produced before it can be used in fuel cells or converted internal combustion engines. In view of this, experts believe that mass production of these cars will only start within the next decade. In fact, at the moment, automakers are still carefully sorting out the best ways to store, distribute, and produce hydrogen. Nevertheless, automakers continue to push for the development of hydrogen cars because they believe that unless society moves away from its current dependence on fossil-fueled cars, the problems associated with fossil fuels will certainly not be eliminated. When hydrogen-powered cars become the status quo, people will reduce their reliance on fossil fuels, enjoy lower prices at the fuel pumps, and be able to decrease the release of the greenhouse gases that are causing climate change.