Identification of Streptococcus canis isolated from milk of dairy cows with subclinical mastitis

Analysis
Scientists performed the study in order to clarify the nature of infection and the dominant causative pathogens for subclinical mastitis that had widely affected cows in a dairy farm in North Rhine-Westphalia, Germany. The hypotheses under investigation were, first, that subclinical mastitis caused by Streptococcus canis is very rare and, second, that polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) and PCR gene amplification methods can accurately identify species-specific characteristics of Lancefield serogroup G Streptococcus canis bacteria.

The study involved collecting milk samples from the affected lactating cows and subjecting them to laboratory analysis. The major techniques involved phenotypic and genotypic characterization of the bacteria present in the samples. First, a direct microscopic count was performed to determine the somatic cell count in each sample; this was followed by culturing and subculturing on various agar plates to identify the groups of bacteria present. The agar preparations distinguished Streptococcus from Enterococcus and Staphylococcus species, all of which occur in cases of subclinical mastitis. The Streptococcus canis isolates were further subjected to biochemical and phenotypic characterization tests. The biochemical tests revealed sensitivity and resistance to various antibiotics and hydrolysis activity on various carbohydrates. PCR amplification methods further revealed the molecular characteristics of S. canis. Genomic DNA extraction was performed first, followed by PCR amplification of the 16S ribosomal RNA gene segment using oligonucleotide primers; the amplified fragments were then digested with restriction enzymes to generate species-specific RFLP patterns. Primers were also used to amplify species-specific genes, and the results were compared against universal gene databases using DNA analysis software. Pulsed-field gel electrophoresis was carried out for macrorestriction analysis of chromosomal DNA, and this revealed that the bacterial isolates forming the S. canis group were closely related.

The results showed that S. canis was the predominant species identified in most of the case samples, which disproved the first hypothesis that S. canis is very rare in subclinical mastitis. It was noted, however, that this case is the largest subclinical mastitis outbreak caused by S. canis so far, which can be attributed to the highly contagious nature of the species. All the same, the use of PCR amplification and RFLP analysis was shown to identify species-specific characteristics with almost 99 percent accuracy, thereby supporting the second hypothesis.
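
The RFLP step can be pictured as an in-silico digest: cut the amplified 16S rDNA at every occurrence of an enzyme's recognition site and compare the resulting fragment-length patterns. The short Python sketch below illustrates only the idea, using a made-up sequence and a made-up recognition site; it is not the protocol or data from the study.

# Illustrative only: hypothetical amplicons and recognition site, not data from the study.
def digest(sequence, site):
    """Return fragment lengths produced by cutting at every occurrence of `site`."""
    fragments, start, pos = [], 0, sequence.find(site)
    while pos != -1:
        fragments.append(pos + len(site) - start)  # cut just after the site, for simplicity
        start = pos + len(site)
        pos = sequence.find(site, start)
    fragments.append(len(sequence) - start)        # trailing fragment
    return fragments

amplicon_a = "ATGCGAATTCGGCTAGAATTCTTAGC"   # hypothetical 16S rDNA amplicon
amplicon_b = "ATGCGAATTCGGCTAGGGCCTTAGCA"   # hypothetical amplicon from another species
site = "GAATTC"                             # hypothetical recognition site

print(digest(amplicon_a, site))  # [10, 11, 5] -> one banding pattern
print(digest(amplicon_b, site))  # [10, 16]    -> a different, species-specific pattern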

The results from this study are of significant importance. The insight offered by the genetic methods of species identification suggests that wrong diagnoses may have been made in the past about the causative agent of subclinical mastitis, which is why S. canis was speculated to be rare in subclinical mastitis. Exclusively phenotypic characterization can have limitations, especially where species exhibit features similar to one another. Despite the time and high costs, future research should consider including genetic markers for all microorganisms, especially the pathogenic ones, to aid in accurate diagnosis of prevailing conditions. Also, the finding that S. canis is sensitive to antibiotics such as penicillin G and amoxicillin-clavulanate holds great potential for the development of antibiotics or vaccines to combat loss-causing subclinical mastitis.

This article was an interesting and very educational read. The steps taken in the research are well sequenced, making them easy to follow and understand. I chose this article because the research was based on a realistic case study. Diseases caused by Streptococcus are very common, and this article can serve as a good reference for many clinical studies.

Cardiovascular Fitness Testing through Submaximal and Maximal Testing

Introduction
Submaximal testing and maximal testing are two test procedures used to determine the maximum aerobic capacity of an individual. The variables recorded in these experiments include heart rate, height, weight, age, and gender under both protocols. The subjects were a 29-year-old male and a 32-year-old male. They completed separate exercises to determine heart rates at different exercise intensities and VO2max.

The purpose of this experiment was to determine cardiovascular fitness through the use of the Astrand-Rhyming bike test and the Bruce treadmill test.

Materials and Methods
This experiment was conducted using two different test machines: the Astrand-Rhyming bike and the Bruce treadmill.

For the Astrand-Rhyming bike test, a heart rate monitor was connected to the skin of the subject using contact gel. The subject sat on the seat of the bike and was told to begin cycling. At the end of each minute, the subject's heart rate was recorded. One of the precautions taken was to prevent the subject from talking so as not to have an upward effect on the heart rate. After the seventh minute, cycling was stopped and the subject was allowed to cool down.
The procedure for the Bruce treadmill test involves having the subject walk on a treadmill. At intervals of three minutes, the speed and incline of the treadmill are increased. The subject continues the test until he becomes fatigued and cannot continue.

Results
Data worksheet for the Astrand-Rhyming bike test.
Age: 29    Gender: Male    Height: 5'6"    Weight: 160

Time (min) | Heart Rate (bpm) | KG  | KGM/MIN
1          | 122              | 2.5 | 750
2          | 126              | 2.5 | 750
3          | 132              | 2.5 | 750
4          | 129              | 2.5 | 750
5          | 126              | 2.5 | 750
6          | 139              | 2.5 | 750
7          | 128              | 2.5 | 750
Data worksheet for the Bruce Treadmill Protocol
Age: 32    Gender: Male    Height: 5'8"    Weight: 160    HRmax: 185.46 bpm
HRmax x 0.85 = 157.64 bpm

Stage | Time (min) | Speed (mph) | Grade (%) | METS
1     | 0-3        | 1.7         | 10        | 4.7
2     | 3-6        | 2.5         | 12        | 7.0
3     | 6-9        | 3.4         | 14        | 10.1
4     | 9-12       | 4.2         | 16        | 12.9
5     | 12-15      | 5.0         | 18        | 15.0

SMVO2 = (Sa x 0.2) + (S x GB x 0.9)
where Sa = speed of the treadmill in meters per minute (1 mph = 26.8 meters per min)
GB = grade (incline) of the treadmill in decimal form
S = Sa x 0.1
SMVO2 = 2.1708 ml/kg/min    SMVO2 = 3.4443 ml/kg/min
Slope (b) = 0.0289    VO2max = 4.2378 ml/kg/min
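
Submaximal protocols of this kind typically estimate VO2max by fitting a line to the submaximal VO2 versus heart-rate points and extrapolating it to the predicted maximal heart rate (here, 185.46 bpm). The Python sketch below illustrates that general approach using the worksheet's SMVO2 values but placeholder heart rates; it is an illustration of the method, not a reproduction of the worksheet calculation.

# Illustration of submaximal-to-maximal extrapolation; the heart-rate values are placeholders.
def extrapolate_vo2max(hr1, vo2_1, hr2, vo2_2, hr_max):
    """Fit a line through two submaximal (HR, VO2) points and extend it to HRmax."""
    slope = (vo2_2 - vo2_1) / (hr2 - hr1)           # ml/kg/min gained per extra beat/min
    return vo2_2 + slope * (hr_max - hr2), slope

hr_max = 185.46                                      # predicted HRmax from the worksheet
vo2max, b = extrapolate_vo2max(114.0, 2.1708,        # hypothetical submaximal heart rates
                               158.0, 3.4443,
                               hr_max)
print(round(b, 4), round(vo2max, 4))                 # slope and estimated VO2max (ml/kg/min)

# Converting a relative VO2max to an absolute rate simply scales by body mass,
# e.g. 4.2378 ml/kg/min * 160 kg = 678.048 ml/min (the figure quoted in the discussion).
print(4.2378 * 160)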

Discussion
The experiment yielded results that support the hypothesis that cardiovascular fitness can be examined through maximal and submaximal testing. The two protocols used, the Astrand-Rhyming bike test and the Bruce treadmill test, gave results that can be used to measure or predict oxygen uptake. Basically, VO2max (the rate of oxygen usage under maximal aerobic metabolism) is the parameter used as a measure of cardiovascular fitness.

The Bruce treadmill protocol yielded a VO2max of 4.2378 ml/kg/min. The VO2max obtained here is a measure of the exercise capacity of the individual being examined. The value indicates the maximal capacity of the respiratory and cardiovascular systems to respond to the stress (in this case, exercise) and supply oxygen. It means that the maximum rate of oxygen consumption is 4.2378 ml/kg/min. If the individual, who has a weight of 160 kg, exercises at maximum level, the maximum amount of oxygen that can be delivered per minute is (4.2378 x 160) 678.048 ml. This means that the two systems, cardiovascular and respiratory, cannot supply more than 678.048 ml of oxygen per minute. In situations or exercises that demand close to this value or more, the individual being examined would easily become fatigued.

The Fick equation (VO2 = cardiac output x a-vO2 difference) allows the rate of oxygen consumption to be determined if the cardiac output (CO) and arterial-venous oxygen difference (a-vO2 diff) are known (The Cardiovascular System and Exercise, 2009). The effect of exercise on each variable of the equation is as follows. The cardiac output is a function of the heart rate and stroke volume. During exercise, either or both of these two variables can increase, thereby causing an increase in the cardiac output. Exercise increases the difference between arterial and venous oxygen levels. During resting conditions, the difference is about 40 ml of oxygen. However, once exercise commences, the diffusing capacity for oxygen increases almost three-fold. This results mainly from increased surface area of capillaries participating in the diffusion and also from a more nearly identical ventilation-perfusion ratio in the upper part of the lungs (Guyton & Hall, 2006). The end result is an increase in the partial pressure of oxygen in the arterial compartment.
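
As a simple numerical illustration of the Fick relationship, the sketch below multiplies a cardiac output by an a-vO2 difference to recover oxygen consumption. The resting and exercise values are rough, textbook-style placeholders, not measurements from this experiment.

# Fick principle: VO2 = cardiac output (L blood/min) x a-vO2 difference (ml O2/L blood).
# The numbers below are rough illustrative placeholders, not data from this lab.
def vo2_fick(heart_rate_bpm, stroke_volume_l, avo2_diff_ml_per_l):
    cardiac_output = heart_rate_bpm * stroke_volume_l       # L of blood pumped per minute
    return cardiac_output * avo2_diff_ml_per_l               # ml of O2 consumed per minute

rest = vo2_fick(70, 0.07, 40)        # ~4.9 L/min CO, modest extraction at rest
exercise = vo2_fick(180, 0.11, 150)  # higher HR, stroke volume, and extraction
print(rest, exercise)                # VO2 rises many-fold from rest to heavy exercise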

Apart from exercise, other factors can increase the variables of Fick's equation. Anything that excites the sympathetic nervous system will cause an increase in the cardiac output. Any form of heightened emotion (anger, anxiety, excitement) will increase the sympathetic stimulus to the heart. This then causes an increase in the heart rate and an increase in the effectiveness of heart contraction (contractility). Prolonged stress on the heart (e.g., a long-term workload) causes it to increase in size (hypertrophy), thereby increasing its pumping effectiveness.

Respiratory Exchange Ratio (RER) is the ratio of the volume of carbon dioxide produced divided by the volume of oxygen consumed on a total body level (Plowman & Smith, 2009). It is usually calculated using expired air. Respiratory Quotient (RQ), on the other hand, is calculated at the cellular level. It is defined as the ratio of the amount of carbon dioxide produced divided by the amount of oxygen consumed at the cellular level (Plowman & Smith, 2009). RQ can be determined in connection with exercise tests that measure VO2max. When the RQ is 0.7, it indicates that fat stores are the only source of energy. When it is 1.0, it implies that only carbohydrate is being burned for energy. An RQ of 0.85 indicates that 50% fat and 50% carbohydrate are being burned. A high RQ indicates that the excess CO2 is derived from anaerobic metabolism; indeed, an RQ greater than 1.1 is a criterion for a reliable maximal test. A low RQ below 0.7 means that the amount of fat being oxidized is not enough and there is no compensatory release of carbohydrates; therefore, the CO2 being produced is very low.
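
The fuel-mix interpretation described above can be expressed as a simple linear interpolation between the fat endpoint (RQ 0.7) and the carbohydrate endpoint (RQ 1.0). The sketch below is a rough illustration of that reading of an RQ/RER value, not a validated substrate-utilization model.

# Rough illustration: read an RQ/RER value as a fat-vs-carbohydrate mix by linear
# interpolation between the endpoints quoted in the text (0.7 = all fat, 1.0 = all carbohydrate).
def respiratory_quotient(vco2_l_per_min, vo2_l_per_min):
    return vco2_l_per_min / vo2_l_per_min

def fuel_mix(rq):
    rq = max(0.7, min(1.0, rq))                   # clamp to the interpretable range
    carb_fraction = (rq - 0.7) / (1.0 - 0.7)
    return carb_fraction, 1.0 - carb_fraction     # (carbohydrate, fat) fractions

rq = respiratory_quotient(2.55, 3.0)              # placeholder gas-exchange values
carb, fat = fuel_mix(rq)
print(round(rq, 2), round(carb, 2), round(fat, 2))   # 0.85 -> about 50% carbohydrate, 50% fat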

Conclusion
Submaximal and maximal testing are integral parts of medical and sports physiology. They are relatively easy to perform, and the machines used are safe and cheap to maintain. These test modes are practical ways in which sports and health experts can measure or estimate the maximum aerobic power of any person or group of people. The reliability of the results obtained from these machines compares well with protocols that involve direct testing of VO2max.

Emerging Technologies in New Media

New media and modern internet-based communication have been reshaped by many websites, including Facebook and Google. Google Wave is a social platform that was released to the public on 30 September 2009. Google Wave has been one of Google's most important and most publicized releases. This paper covers key information related to Google Wave, including key terms, along with detailed information.

Introduction
Roche, Valdez, and Douglas (2009) discuss Google Wave as a new-age communication platform based on the real-time requirements of users and social networkers. Jens and Lars Rasmussen developed Google Wave as an innovation usable by millions of users, and the main purpose behind inventing it was to reshape email communication.

Figure 1 Google Wave logo
The platform combines email, chat, blogging, wikis, social networking, and project management features to build a more organized, browser-based communication client. Google Wave is also one of the best platforms for sharing all kinds of files, and chat participants can join ongoing discussions. A group of friends or business partners can carry out discussions together.

Figure 2 Google Wave Interface
Many innovative features are included in Google Wave; this paper covers some of them.
Features of Google Wave
Single Access

Van Grove and Horton (2010) mention that with conventional email applications, users must log in separately for chat, email, office applications, and blogging. In contrast, Google Wave lets its users chat, blog, and email on a single platform. Google Wave is a revolutionary email-based platform that ties all of these applications together to ensure easy communication between users.

Future Generation Sci-Fi Communication
Google Wave is more than an email platform. With videos, images, rich text, and maps, simple email is converted into something much richer, and conversations become more dynamic on this platform.

Andres (2010) argues that Wave has changed the face of the communications required and demanded by customers and has blurred the limits of communication. Contact-list building is easier: a simple drag-and-drop mechanism is used, and dragging a name onto the contact list adds that person to it.

Google Wave, announced in May 2009, is still in preview mode for developers, as the newly developed platform remains in its early stages.

Real Time Communication
One of the most attractive features of Google Wave is its real-time functionality, which is not offered by other email-based communication portals. On most platforms, people online cannot see what others are typing. In contrast, as argued by Holzner (2010), Google Wave lets users see what other users are typing, character by character. Inline commenting is also included as a feature, and the statements contributed by different users are accompanied by their avatars.

Another important feature, as highlighted by Roche and Douglas (2010), is playback. The playback option allows participants who have just joined to roll back a conversation post by post. Google officials have added that Google Wave is one of the most promising email clients, an answer to customer demands that will prove groundbreaking among email clients.

Officials and developers at Google have described Wave as composed of equal parts conversation and text. Users work on this portal in a rich multimedia environment. Everything is combined on one portal: users can write reports, plan events, conduct research, stay in touch with business partners in online meetings, and chat with friends. Wave thereby helps its users stay informed in a creative and collaborative manner.

Embeddability
Crumlish and Malone (2009) argue that Google Wave can be embedded on any website, including blogs. It remains to be seen how Wave fits alongside the many applications whose functions it overlaps.

Wave Continuation
Andres (2010) defines a Wave as a string of instant messages and emails. Participants are notified when a change is introduced in a particular Wave. A search feature lets a participant search for content within a particular wave, and waves generated from other email and instant messaging portals can also be linked.

Extensions And Applications
An important feature Google provides to its users is easy web-based access. As with Facebook or iGoogle applications, users can create many applications of their own, ranging from real-time bots to complex games.

Wiki Functions
Any conversation written within Google Wave can be edited and changed by another user, since all conversations on the platform are shared between Google Wave users. Thus any shared information can be edited and appended, and comments can be made within an ongoing conversation.

Open source
The code used by Google Wave is open source, so changes are welcomed from new developers. This encourages innovation.

Conversation Playback
Google Wave users can play back conversations in order to see what was sent in the past.

Natural Language Usage
Google Wave provides spelling correction for users as they type. An auto-translation mode is also available, which can make communication even easier.

Drag And Drop File Sharing
Crumlish and Malone (2009) note that there is no attachment system in Google Wave. Files can simply be dragged and dropped, and other viewers can see them immediately.

In addition, many other features set Google Wave apart from other email clients. Rich text editing, along with the ability to use gadgets, makes Google Wave easy and interesting to use.

It remains to be seen, however, whether Google Wave is another competitor of Facebook and Twitter, or a combined, unified tool standing head to head against Microsoft or Cisco.

Google Wave has also received some criticism, showing there is room for improvement. It has been described as evolutionary rather than revolutionary, and professional networking on Wave needs much more work. It is argued that Wave is not only for social users but is also a great communication tool for business users; chatting about daily life is not as important as communicating online with business partners.

Terminologies in Wave
In order to understand Wave as a complete communication platform, there is a need to understand a few terms used in Wave.

Figure 3 Waves and Wavelets in Wave
Roche, Valdez, and Douglas (2009) explain that a wave refers to a continuous, threaded conversation. Participants in a wave can be a group of people or robots. A wave is like the entire saved history of an instant messaging session; any topic discussed in a single chat or conversation is referred to as a wave.

A wavelet also refers to an ongoing conversation, but one that is a subset of a larger conversation. A wavelet can be thought of as a single instant messaging conversation that in most cases is part of a larger chat history. A unique feature is that wavelets can be added and managed separately from their parent waves.

A blip is a single, individual message, comparable to a single line of an instant messaging conversation. Blips can be attached to other blips; these are known as children. Blips can be published or can remain unpublished.

Andres (2010) adds that a document is the content within a blip. This refers to the actual content, which often includes characters, text, and documents within blips.
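
The wave/wavelet/blip hierarchy described above can be pictured as a simple nested data structure. The Python sketch below is a hypothetical model of that hierarchy for illustration; it is not Google Wave's actual data model or API.

# Hypothetical illustration of the wave > wavelet > blip hierarchy described in the text;
# not Google Wave's real data model or API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Blip:
    author: str
    document: str                                            # the content carried by the blip
    children: List["Blip"] = field(default_factory=list)     # replies attached to this blip
    published: bool = True

@dataclass
class Wavelet:
    participants: List[str]
    blips: List[Blip] = field(default_factory=list)          # a sub-conversation

@dataclass
class Wave:
    title: str
    wavelets: List[Wavelet] = field(default_factory=list)    # the whole threaded conversation

root = Blip("alice", "Shall we plan the meeting here?")
root.children.append(Blip("bob", "Yes, adding the agenda now."))
wave = Wave("Project kickoff", [Wavelet(["alice", "bob"], [root])])
print(len(wave.wavelets[0].blips), len(root.children))       # 1 top-level blip, 1 child reply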

An extension is a kind of mini-application that works within Wave; while using Wave, users can build and run applications. These applications come in two kinds, robots and gadgets, which are discussed in the upcoming sections.

Gadgets are applications that can be designed and used by users. These applications are built on Google's open social platform.

Robots are automated participants within a Wave. They can interact with waves and communicate with users. Outside information from other applications or websites, such as Twitter and in some cases Facebook, can be added, and robots can also perform actions based on these applications.

Wave Gadgets

Figure 4 Google Wave Gadgets
Roche and Douglas (2010) note that gadgets are one of the two main kinds of Google Wave extensions. Gadgets are fully functional applications that can be added to change the look of Wave and that provide much of the fun of using it. Any application designed for Google's open social platform or iGoogle can be used in Wave, so thousands of applications created for Google can be used in Wave. Moreover, a gadget built by users within waves can let them interact live with other Wave users, creating an environment of online gaming in which all online users can participate. Here there is a resemblance to Facebook and Twitter, where gaming is online and many users interact and contribute; networks such as Facebook and Twitter make use of friend networks to make gaming and applications more fun and productive.

Crumlish and Malone (2009) add that, in contrast to Facebook and Twitter, gadgets belong to all users within a wave rather than to individual users. Gadgets do not have titles of their own and interact directly with ongoing waves. Some gadgets already built into Wave include a Sudoku gadget; Bidder, which turns a wave into an auction centre; and Maps, which works with Google Maps.

Google Wave Robots

Figure 5 Google Wave Robots
Van Grove and Horton (2010) describe robots as another unique kind of Google Wave extension. Robots are like other users within waves: they can interact with other users and intervene in ongoing waves and conversations. Unlike real-life users, robots are automated, and Google Wave robots are quite robust. They can amend information in waves and communicate with other users and waves. Because robots are a form of user, their behaviors can be changed as required, and they can be made to perform functions as simple as correcting spellings or as complex as debugging. Robots already included in Wave include Debuggy, an in-wave debugger; Stocky, which pulls stock prices based on stock quote mentions; and, most of all, Tweety, a Twave robot that displays tweets inside a wave.
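
As a toy illustration of a robot acting as an automated participant, the sketch below shows a hypothetical "spelling corrector" that rewrites the text of a message it is allowed to see. It does not use the real Google Wave robots API; it only mimics the kind of behavior described above.

# Hypothetical illustration of a Wave-style robot that corrects spellings in a message.
# This is not the real Google Wave robots API, just the behavior described in the text.
CORRECTIONS = {"teh": "the", "recieve": "receive", "adress": "address"}

def spelling_robot(blip_text):
    """An automated participant: rewrite a blip's text, fixing known misspellings."""
    words = blip_text.split()
    fixed = [CORRECTIONS.get(w.lower(), w) for w in words]
    return " ".join(fixed)

print(spelling_robot("Please recieve teh updated adress"))
# -> "Please receive the updated address"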

Embeds In Wave
Figure 6 Google Wave Embeds
Andres (2010) notes that embedding is already familiar to users of YouTube, but here it is more complex than just embedding a YouTube video in a blog: Wave can interact with a third-party website. Embedded websites support many functions supported by Wave itself, including dragging and dropping files. The embedding feature in Wave is in its early stages. There are two main embedded web applications in Wave, YouTube Playlist Discuss and Multiple Extensions Embed: a YouTube video can be discussed within waves, and Multiple Extensions Embed allows multiple interactions on different waves. It has been argued that Wave embedded items could replace many static comments; as more work is done on these embeds, and if that work succeeds, the comments received on YouTube videos could be replaced with waves.

Conclusion
Information technology has changed, and Google Wave represents one such revolution. Google has brought many changes to the communication platform. Email has been revolutionized by Google Wave through supporting features that make it more dynamic, and communication has been reshaped by these new developments.

Web Search Report Worksheet

Name: Gloria Gonzalez    Search Engine: Google.com

Section 1: Features of the Search Engine
Identify which features can be found in your search engine. Your search engine may have one or multiple features.

Does it:
___ Search the web? YES
___ Search other search engines? NO
___ Act as a directory (lists of categories to select from)? YES
___ Have an advanced search option? YES
___ Have Help options? NO, there is no help feature on the main page, but there are help options on subpages (e.g., under the link Search Settings).
___ Use Boolean logic (and, or, not)? YES, and it affects the outcome of the search.
___ Use truncation/wild card characters or special characters (, , )? YES
___ Translate web pages? YES
___ Allow field-limiting searches (can you search for filetype, site, intitle)? YES
___ Allow you to do a search within a completed search (find similar pages, offer terms to narrow your search)? YES, you can find similar pages.
___ Recognize parentheses or quotation marks to search for phrases? YES, it will prioritize webpages including these characters.
___ Allow you to change the way the page is displayed (number of returns on the page, preferred language)? YES
___ List the number of pages in the database? YES
___ Explain how the results are ranked? NO, no explanation is given; sites are simply displayed in order of calculated relevancy.

Please list other noteworthy features:
You can narrow the search from the start by choosing categories such as Books, Images, Video, etc. Pre-selecting a category serves as a filter that helps produce optimally relevant results.
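
To illustrate the Boolean, phrase, and field-limiting features noted in the checklist, the short Python sketch below builds a few example Google query strings and URL-encodes them. The specific queries are made-up examples; the operators shown (quotation marks, OR, exclusion, site:, filetype:, intitle:) are the ones referred to above.

# Example queries illustrating the checklist features above; the query topics are made up.
from urllib.parse import urlencode

queries = [
    '"search engine evaluation" worksheet',          # quotation marks force a phrase match
    'credibility OR trustworthiness -advertising',   # Boolean OR plus term exclusion
    'site:edu intitle:"web search" filetype:pdf',    # field-limiting operators
]

for q in queries:
    # Build the URL a browser would request for this query (result layout may vary).
    print("https://www.google.com/search?" + urlencode({"q": q}))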

Section 2: Overall Evaluation (Create a bulleted list for each question)
After reviewing the key features of your search engine, answer the following questions.

What are the strengths of your search engine?
Google's PigeonRank technology produces consistently accurate results
Multiple result types are generated (i.e., maps, images, and websites) in a single search
Easy to use
Accessibility of the Google toolbar
What are the weaknesses of your search engine?
Images are not easily filtered by size or type
What would you change about your search engine?
I would include a sidebar that allowed further narrowing, for instance by file type or size.

Discuss factors that impact a resources credibility

Part 1
Credibility refers to the believability of some information and/or its source. Research finds that credibility is a complex concept with two main dimensions: expertise and trustworthiness. Other factors affect credibility perceptions as well.

Author and publisher credibility is a critical issue. Credibility concerns arise when the authors have no experience with the subject, hold no degrees in related disciplines, and are not working in related professions. Credibility is also questioned when there is no author or publisher information and no bibliography or works cited list, so the information cannot be verified.

Karen (1997) goes on to say that content-related issues can give you clues about an author's reputation, such as whether or not it is easy to verify the accuracy of the information in the source. Some information provided in a source may be inaccurate, and it may lead to damaging results, especially when the seeker of the information really depends on it; health or diet resources are examples.
Perspective is also a weighty factor when it comes to a resource's credibility. The author may hold a differing opinion from the reader, and this may cause the resource to be regarded as not credible. Claims and assertions may be so vague that the resource is unclear and hazy in meaning, or the writer may present a one-sided view that does not acknowledge or respond to opposing views. One may want to see whether a point of view, bias, or subjectivity is evident and whether the information is presented as fact, opinion, or both. With that in mind, a reader would want to see how this affects the value of the source for their project.

Coverage of a certain topic definitely impacts the credibility of a resource. It is worth noting that some resources offer very shoddy information, thus raising questions about their credibility. The author might also overstretch the scope of the topic.

Currency of a resource also has a strong impact on its credibility. A researcher would not want to use a resource with outdated information when timeliness matters. Resources should strive to contain up-to-date information to be highly credible.

The relevance of the source also impacts its credibility. The purpose of the source should be to provide readers with new information or direct them to additional information. It may also have the goal of explaining a concept or persuading them.

Part 2
We are all human, and humans are prone to error. Although we all make mistakes, poor grammar and spelling can greatly reduce credibility. Consider the addresses below:
www.computertehcnoloyg.com vs. www.computertechnology.com

The first address clearly shows serious typos that really affect its credibility. Such carelessness already puts the URL's credibility in question, and unlike the second URL, it will not lead to any website. It also tells a researcher that, given such lack of attention to detail, the website itself cannot be trusted.

Pamela (2003) also says that impression matters greatly here. The first URL already gives a researcher a bad impression of the website itself. This turns off the users and makes them question the credibility of the website.

Conclusion
When researching, a reader should make sure that the source is able to answer the following questions:

What does this source say? Who says it? Why do they say it? What is their evidence? Where did they find the information? Why should you believe it? Is it known to be true? Is it the whole truth? Who else supports it?

What Are GMOs? Are They Safe for All Age Groups?

The three confusing letters G-M-O have already become inseparable from our daily activities. This abbreviation is everywhere on food product labels and is often included in newspaper editorials and headlines. A wealth of literature has been written about how GMOs can resolve food scarcity issues, and dozens of reports have been published to confirm the safety of GMOs and their suitability for everyday use. Yet the definition of GMOs and their safety remain topics of major professional and consumer concern.

The widely used letters GMO stand for Genetically Modified Organisms, which are extensively used in the development of genetically modified products. GMOs are produced by means of genetic engineering and biotechnology, which use living organisms to create a product or run a process (ADA, 2005). In scientific terms, GMOs and biotechnology imply the use of in vitro nucleic acid techniques and the direct injection of nucleic acid into cells or cell organelles, or even the fusion of cells beyond the taxonomic family to which they belong, for the purpose of overcoming reproductive and recombination barriers (ADA, 2005). These complex processes result in the development of modified features and traits in microorganisms, animals, and plants; these traits certainly go beyond the boundaries of traditional selection and cannot be reproduced via conventional breeding techniques (ADA, 2005). As a result, GMOs are the organisms and products that have been modified using genetic engineering or biotechnology techniques. Given the nature and specificity of biotechnology, GMO products should, and actually do, possess features and traits that are unusual for their taxonomic family and that cannot be reproduced through conventional breeding and selection. GMOs are divided into four broad categories: foods containing living organisms; foods containing or derived from ingredients that were themselves derived from genetic modification; foods containing single ingredients produced by genetically modified microorganisms; and foods that contain ingredients processed by enzymes produced with the help of genetically modified microorganisms (ADA, 2005). The question, however, is not how to describe and categorize GMOs, but how safe they are for consumers.

A mountain of research has been performed to confirm and reaffirm that genetically modified organisms are safe. Many organizations have published new statements of policy or reaffirmed or approved existing statements related to food or agricultural biotechnology (ADA, 2005). This, however, does not mean that GMOs are safe for all age categories and population groups. Organizations actively work to promote the relevance of science-based evaluation methods and techniques, which businesses and companies should apply to new plant varieties before they are produced commercially (GMA, 2006), but the promotion of scientific evaluation techniques alone cannot suffice to guarantee the safety of GMOs. In their current state, GMOs are still associated with health risks and controversies. Because these food products have not previously been in the food supply, there is no concerted agreement as to how the human organism will react to them (ADA, 2005). Nevertheless, all GMOs currently available in the international market have passed all necessary safety assessments and are believed to be safe for human health in all age groups (ADA, 2005). Moreover, with the use of science-based approaches to biotechnology-derived products, GMOs are likely to enhance human health through improved nutritional value and reduced use of agrochemicals (ADA, 2005).

GMOs are genetically modified organisms produced with the help of genetic engineering and biotechnology. Genetic engineering implies the use of in vitro nucleic acids to modify one or more genetic traits in plants, animals, and microorganisms in ways that go beyond the boundaries of their taxonomic family. Such modifications are usually impossible with conventional breeding and selection. The safety of GMOs is still a matter of hot professional debate. All GMOs currently available in the international market have passed all safety assessments and are believed to be safe for all age groups. However, only the use of science-based approaches can guarantee that GMOs are safe for all age groups and that they enhance human health through better nutritional value and reduced use of agrochemicals.

Integrated Circuit Materials

Many integrated circuits are fabricated using a range of materials, including metals and metal alloys. The alloys cover a wide scope: aluminum-copper alloy, titanium, tungsten, and an alloy of aluminum and titanium, all of which are joined to the main frame of the silica matrix, according to Gibilisco. Alloys have a higher ductile strength, which lets them withstand thermal forces in the circuit. Other semiconductor materials can be used, but silicon has found the most application because of its refractory quality. Microchips are therefore fabricated from wafers that consist mostly of pure silicon. Iron is used to make the core, which is wound with copper wire, whereas the pins are made from a copper-zinc alloy (brass). Brass is better in this application because the pure metals copper and zinc have a knee in their stress-strain curves, as is the case with most metals, and this knee could cause failure in the circuit. Moreover, brass also has good conductivity.

The first two layers of the integrated circuit are made on a semiconductor substrate, which is mainly silicon. The first layer is made from a high-k substance. Gibilisco (1992, pp. 33-51) says that "high-k substance" is a common term for a material having a high dielectric constant compared to normal dielectrics like silicon oxide or the corresponding nitride. Such high-k materials include oxides of the transition elements in the periodic table, such as oxides of hafnium, zirconium, or even ytterbium, although the last is hard to come by. The second layer consists of titanium nitride. Simon and Cavette (1996, par. 1-21) state that the second layer is also configured as a conductive region, which may or may not serve as a diffusion barrier, the main function required of the layer.

Multiple layers
Gibilisco indicates that the layers applied to form the electrical connections between the layers of a chip are made of metal, owing to the metals' good electrical properties. The finished chip is placed under a cover, used mainly for protection, and has copper wires that link the chip to the computer's circuit board. Copper is chosen here because it is a good conductor of electricity and is well suited to carrying electrical impulses. Integrated circuit chips are semiconductor devices fabricated from semiconductor wafers, with a polymer material used for bonding. The bonding material most used in the fabrication of an integrated circuit is atactic polypropylene, which, according to Gibilisco, usually carries an additive such as an antioxidant or a stabilizer.

The low strength and low Young's modulus of atactic polypropylene allow it to mesh well with the silica matrix. It yields a uniform bond across the surface of the substrate and forms bonds with standard release temperatures. Moreover, thermal cycling through the temperature ranges used in the transfer process does not change the bonding characteristics or the release temperatures. In simple terms, Simon and Cavette (1996, par. 1-21) explain that the characteristics of the polymer are not affected by the processes that occur in the integrated circuit. In the lithography process, patterns are defined by applying a viscous liquid (a photoresist) to the surface of the wafer. The photoresist is then baked to harden it, and it is removed selectively by projecting light through a reticle carrying the mask information. The circuit also includes auxiliary support boards that link the electronic components through conductive paths etched from sheets of copper, which are laminated onto a non-conductive substrate.

The conducting layers are made of copper foil, according to the description by Simon and Cavette (1996, par. 1-21), because of copper's good electrical conductivity; copper carries the electrical impulses on which the integrated circuit depends. The dielectric insulating layers are manufactured through a lamination process using pre-impregnated epoxy resin. Epoxy resin is a composite material that provides the insulation required in the circuit; its high tensile strength allows it to be used at high temperatures, so it can withstand the heat generated in the circuit. Simon and Cavette (1996, par. 1-21) indicate that the board described above is coated with a green solder mask. Various dielectrics can be chosen to provide different insulation values, depending entirely on the demands of the given circuit; these include materials such as Teflon (polytetrafluoroethylene), CEM-1, FR-4, CEM-3, and FR-1.

Many printed circuit boards are fabricated with a copper layer over the whole substrate surface, as described by Simon and Cavette (1996, par. 1-21). A process called etching removes the excess copper that is not needed; this is done after applying a mask made of glass. From this paper, it is clear that silicon has taken center stage in the fabrication of an integrated circuit; it is the backbone of the whole fabrication process. To conclude, the advantages of silicon over the other materials are highlighted below.

Advantages of Silicon in the IC
Silicon finds major application in integrated circuit fabrication because it forms the semiconductor substrate that makes up the matrix of the integrated circuit. Moreover, silicon is readily available, which makes the entire process economically feasible. The inscription of identifying marks on such delicate parts of the IC is also made possible using silicon.

Generally, the silica matrix has a high softening temperature, which matters because the circuit often generates a lot of heat. Silicon resists the forces and fatigue caused by thermal expansion because of its highly refractory quality. Moreover, it has high thermal stability and high tensile strength (stress-strain ratio) when the other materials are incorporated into its matrix in this application.

TB on airline flights

Introduction
Tuberculosis (TB) is an infectious bacterial disease that mainly spreads through contact. The World Health Organization estimates that a person with TB infects, on average, 10-15 people, although not all those who get infected with the bacteria become sick. Some end up being carriers of the disease; that is, they retain the bacteria in the body and infect others but never become seriously ill. To get infected, one has to inhale the minute particles carrying the disease, which are usually let out by a person suffering from the disease through coughing, sneezing, spitting, or shouting. However, mere contact with a person, such as shaking hands or touching, does not bring about an infection.

Airlines and TB
From the transmission modes listed, it is clear that the presence of a TB-infected person in an airplane does not necessarily mean an infection for the other passengers. Unlike other respiratory diseases such as Severe Acute Respiratory Syndrome (SARS), whose highly infectious nature justifies an automatic total quarantine for its sufferers, TB is different because infection is not automatic. In addition, some strains or stages of TB are not infectious; yet, for such facts to be established, one needs to run a number of tests.

W.H.O guidelines
As is the case when dealing with health matters, organizations have to rely on the guidelines issued by the W.H.O. as a minimum, and thereafter set their own extra guidelines if they so desire. The W.H.O. guidelines for the handling of TB patients are contained in its 2006-2015 strategy, also known as the Stop TB Strategy. According to the W.H.O., transmission of TB during flights caught the organization's attention in the early 1990s, when a few cases of people who had been infected with TB started to emerge, causing anxiety among travelers. Of even more concern was infection with the then-new strain known as multi-drug-resistant TB (MDR-TB). In an effort to restore public confidence in air travel, the organization decided to issue guidelines on the handling of TB patients. The guidelines recognize pulmonary or laryngeal TB as the infectious forms, while at the same time stating that there is no evidence of any bacteriological or clinical infection that has been attributed to air travel exposure.

Air travel is the preferred mode of transport for millions of people each day, and with such numbers it is nearly impossible to screen each traveler individually. Given this bottleneck, airlines have to take austerity measures even before they take steps to conform to the W.H.O. measures. The risk of transmission of TB depends on the length of time one is exposed to the germ-carrying particles, closeness to the carrier, and the conditions in the plane, such as the level of crowding and ventilation. For airlines, especially those operating long flights, the first focus should be on creating an environment that does not provide ambient conditions for the bacteria to spread. They can do so by ensuring good air quality and reasonable spacing between the passengers.

Some countries demand that all people immigrating to them undergo mandatory testing for TB, with those found infected not being allowed to immigrate. Such countries include the US, UK, Australia, and Switzerland. Some require screening to be done in the country of origin, others screen at the point of entry, and others screen at both points.

However, these screenings do not mean much to the airline industry because of their criteria. To begin with, the screening demanded by these countries affects only asylum seekers, refugees, and other immigrants. These are by no means the only people capable of carrying TB, meaning that even if all immigrants were ascertained to be free from TB, the disease could still come from other quarters. Moreover, a good majority of those who travel by air do not fall into these categories. Secondly, TB can be transmitted within a very short time, so there is no assurance that a person will not become infected immediately after being declared free of the bacteria.

The W.H.O. guidelines focus on passengers whose flights take more than eight hours, because these are the people who pose the greatest challenge in curbing the spread of the disease. In formulating its guidelines, the W.H.O. is guided by, among other things, the need to comply with country-specific laws and to maintain the patient rights inherent in any medical procedure. In addition, the W.H.O. notes that there are no proven cases of active TB that have ever been linked to infection during air travel (W.H.O.). For that reason, precautions taken to minimize the risk of TB transmission should have minimal effects on travel and trade (W.H.O.).

First among the measures recommended by the W.H.O. is the maintenance of passenger records. The information contained in airline passenger documentation should be as thorough as possible to enable easy tracing of passengers. The basis of this is that the guidelines do not provide for screening of passengers before a flight, meaning that the discovery of infected people mostly takes place after the flight. That means that if someone is suspected to have contracted the bacteria during the flight, then he should be easily traceable.

With the exception of those with infectious TB, the guidelines do not provide for denial of travel to TB patients. Those with the infectious form should be excluded from travel, and if the disease is noted mid-flight, it is recommended that the patient be isolated if possible. Under certain conditions, passengers on flights that have a TB-infected person on board should be informed. These conditions include the duration of exposure, the degree of infectiousness of the disease, and the proximity of the passenger to the other passengers. Other guidelines from the organization cover areas such as ventilation and the health of the cabin crew.

Evaluation of TB threat
From the guidelines, it is clear that TB infection is not considered a major threat. In fact, for a case to be of concern it has to meet three conditions: the flight has to be long (more than 8 hours), the disease has to be infectious, and the passenger has to sit in close proximity to the potentially affected passengers. Without meeting these criteria, the case will not catch the attention of the authorities. Yet, even with all that slack, the guidelines are still under severe criticism for being too stringent and time-wasting. According to Fiore, 13 studies have not linked any active TB to airline infection, and in addition, the UK government issued guidelines that termed tracing of passengers unnecessary. Instead, the government wants passengers at potential risk simply to be informed that they are at risk of contracting the disease. Similarly, AFP cites a study in which 2,761 passengers were screened after they had potential contact with a TB-infected person. Of the 2,761, only 10 returned a positive result, and with a mild infection incapable of causing active disease. Accordingly, the author dismisses the guidelines issued by the W.H.O. as a waste of time and resources.

Conclusion
From the foregoing, it is obvious that TB is of little concern to most travellers. The airplane does not provide the relevant conditions the bacteria needs to survive and infect. The guidelines issued by both countries and W.H.O should be sufficient to give anyone engaged in air travel the confidence to do so. Passengers should instead worry about the other highly infectious diseases such as SARS.

2012 Date with Doomsday Article Critique

Is There Science Behind 2012 Prophecies? by Laurie Nadel

Part A
In an interview conducted by Laurie Nadel (n.d.), Gregg Braden, author of the best-seller Fractal Time, discusses the scientific basis for the 2012 prophecies. According to the interview transcript published online, the 2012 prophecies are based on a set of scientific findings that center on the position of the Earth in relation to the Milky Way. Throughout the interview, Braden introduces several concepts that help in understanding the events often attached to the year 2012. The interviewee argues that the changes brought about by 2012 are the result of a normal cycle that occurs every 5,125 years.

Braden explains that the year 2012 should be understood as the end of an Earth cycle and not the end of the world, as other people may put it (Nadel, n.d.). The Earth undergoes a particular cycle occurring every 5,125 years, which is similar to normal everyday cycles such as a 24-hour day or a 60-minute hour (Nadel, n.d.). The difference lies in the length of each cycle, which affects people's understanding of the Earth cycle or, worse, leads them to forget the event altogether.

The cycle refreshes during the period mentioned and brings about changes in the environment of the Earth, which can prove to be devastating for the human race (Nadel, n.d.). According to Braden, the cycle of 5,125 years refreshes on December 21, 2012 (Nadel, n.d.). On that date, the Earth is aligned with the Milky Way without any obstructions (Nadel, n.d.). When this happens, the planet is exposed to the field of energy found in the galaxy, which is considered to be a formidable source of energy (Nadel, n.d.). This magnetic energy, also referred to as magnetic filaments, has an effect on the Earth depending on the distance and tilt of the planet in relation to the galaxy (Nadel, n.d.).

Braden argues that former civilizations failed to survive the devastating changes of the cycle because they did not understand the changes they were experiencing (Nadel, n.d.). In order to learn more about the events that could potentially define the year 2012, Braden mentions that geological and archaeological records are helpful tools, serving as windows onto previous cycles (Nadel, n.d.). With this information, he emphasizes, people are equipped with the knowledge that can help them survive the disasters that may be encountered during the year (Nadel, n.d.). In relation to this, there is a call for cooperation and heart-based living among people, with the assumption that collective action is among the factors that can help in survival (Nadel, n.d.).

Heart-based living, as mentioned earlier, is used by the author to show that people can make their own contributions, positive or negative, that affect the magnetic field of the Earth (Nadel, n.d.). The phenomenon was first observed during the September 11 attacks, when satellites in space showed variations in the Earth's magnetic field (Nadel, n.d.). In relation to this, it becomes important for individuals to harbor positive feelings and live a coherent life so as to move the magnetic field from disorder to stability (Nadel, n.d.).

In the end, the need for people to work together is highlighted. Changes can happen, and people may fear them. Nonetheless, it is important that they gain knowledge about these changes and respond to them in a collective manner.

Part B
There are tools being constructed for the purpose of assessing the magnetic fields. The results are published online, and the fields are updated daily and in real time. Aside from this, the Global Coherence Project has the purpose of educating people on strategies that can be taken to produce a coherent lifestyle. No changes are required in terms of everyday living, and the steps are easy. There is no need to alter any habit, imploration, or thought. The only requirement is that individuals inculcate the heart into everyday living.

Virtualization

Introduction
Virtualization is a set of technologies designed to provide a layer of abstraction between a system's hardware and the software that runs on it. The concept provides a logical view of information system resources rather than a physical one. Its roots can be traced back to disk partitioning, which divided a physical server into multiple logical servers. When this division occurs, each logical server is capable of running an operating system and applications independently.

According to David, Wade, and Dave (2006, p. 4), this concept allows a computer system's resources to be systematically divided or shared by multiple environments simultaneously. These environments may or may not interoperate, and they may not even be aware that they are running within virtual environments.

Today, physical abstraction occurs in several ways. For instance, servers and workstations no longer need dedicated physical hardware to run as independent entities, because they can easily run within a virtual machine. When these components run as virtual machines, the computer's hardware is emulated and presented to the operating system as if it truly existed. This technology removes the dependence that operating systems used to have on their underlying hardware.

It is also possible to run multiple virtual machines with different operating systems on the same physical machine simultaneously.

It is mostly a technique employed by organizations owning at least three servers, used to consolidate and make better use of server resources. A well-designed virtualization solution consists of servers, storage devices, and virtualization software that reduce the number of server boxes, mitigate the likelihood of hardware failures, and reduce system downtime (Jim Kerr).

Types of Virtualization techniques
There are three different categories of virtualization: storage virtualization, network virtualization, and server virtualization.

Storage Virtualization
This is basically the amalgamation of physical storage devices, and it is mostly used in enterprise setups. Multiple storage devices are combined into one logical device, which appears to users as a single storage device. The logical resource creates an abstraction that hides the complexities of the underlying storage devices, and this abstraction significantly improves their management and administration.

When storage is treated as a single logical entity, regardless of the hierarchy of physical media devices present, installed applications can read from or write to a single pool rather than individual devices. Virtual devices reduce the one-to-one physical mapping between storage devices and servers. Storage virtualization may occur at three levels: the server level, the storage network level, and the storage system level.

Server level: Here virtualization is implemented by software that resides on the server itself and manages the storage devices attached to it. With this software, the operating system behaves as if it is communicating with a physical device type when it is actually communicating with a virtual disk; this level also lets a user add server and storage capacity easily without disrupting existing operations.

Storage network level: This is an open standard that helps deliver the many-to-many functionality critically needed to meet storage requirements; the functionality here may include scaling, virtualization, automation, simplification, interoperability, and, finally, investment protection. This level also supports the management features and the I/O price-performance demanded in modern competitive IT environments, while providing the storage component investment protection necessary to reduce capital expense.

Storage system level: At this level, virtualization is implemented on the storage array controllers, independently of the host. These controllers can create virtual disks, snapshots, and clones in collaboration with management software. The process is centrally managed with the aid of a storage management server and a web browser.
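
As a toy illustration of the single-pool abstraction described above, the Python sketch below concatenates the capacities of several hypothetical physical devices into one logical volume and maps a logical offset back to a physical device. It is a conceptual model only, not how any particular storage virtualization product is implemented.

# Conceptual illustration of storage pooling: several physical devices presented as one
# logical volume. Device names and sizes are hypothetical.
class LogicalVolume:
    def __init__(self, devices):
        # devices: list of (name, capacity_in_gb) tuples making up the pool
        self.devices = devices
        self.capacity = sum(size for _, size in devices)

    def locate(self, logical_offset_gb):
        """Map a logical offset to the physical device and offset that back it."""
        remaining = logical_offset_gb
        for name, size in self.devices:
            if remaining < size:
                return name, remaining
            remaining -= size
        raise ValueError("offset beyond the logical volume")

pool = LogicalVolume([("disk_a", 500), ("disk_b", 1000), ("disk_c", 250)])
print(pool.capacity)          # applications see a single 1750 GB device
print(pool.locate(1200))      # ('disk_b', 700): the physical placement stays hidden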

Network Service Virtualization
Network service virtualization removes the need to acquire a separate network device every time a service is required from the network. Its value to organizations and to the Chief Information Officer is tremendous. First, it brings flexible management interfaces, because network operations can manage many instances of a network service. Second, network equipment acquisition costs are reduced substantially, since service delivery is shifted away from physical devices to software images that extend network access without deploying special hardware in each instance where a service is needed. Third, it is easy to extend a network service, which increases application performance.

Virtualized network: Its building block partitions a network into logical networks, each with unique attributes such as switching, routing, bandwidth and security. Its architecture consists of three components; a minimal sketch of how they fit together follows the list below.

Controlling network access: Users are authenticated and authorized here. This component identifies authorized users and automatically places them in their appropriate logical partition.

Isolating paths: This preserves network isolation across the entire organization, keeps traffic partitioned over a routed infrastructure, and maps isolated paths to virtual LANs and virtual services.

Virtual services: This component provides access to shared or dedicated network services such as DHCP, VoIP call management and DNS.
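As a rough, vendor-neutral sketch of how these three components fit together, the Python snippet below uses hypothetical role names and VLAN numbers: access control authenticates a user, places them in a logical partition mapped to an isolated path (a VLAN), and attaches the virtual services available to that partition.

# Sketch only (hypothetical names, not any vendor's product): access
# control assigns an authenticated user to a logical partition, each
# partition maps to an isolated path (VLAN), and virtual services are
# attached per partition.
PARTITIONS = {
    "staff": {"vlan": 10, "services": ["DNS", "DHCP", "VoIP"]},
    "guest": {"vlan": 99, "services": ["DHCP"]},
}

AUTHORIZED_USERS = {"alice": "staff", "bob": "guest"}

def admit(user):
    """Authenticate the user and place them in their logical partition."""
    role = AUTHORIZED_USERS.get(user)
    if role is None:
        raise PermissionError(f"{user} is not authorized")
    partition = PARTITIONS[role]
    return {"user": user, "vlan": partition["vlan"],
            "services": partition["services"]}

print(admit("alice"))   # -> VLAN 10 with DNS, DHCP and VoIP access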

Server Virtualization
Server virtualization may be defined as the creation of digital abstractions that represent real physical servers. It improves server utilization in test and development environments. Physical servers can be consolidated into virtual units housed within a few machines; these virtual servers continue to provide the same functions as their physical counterparts in a reduced physical package, which eventually results in savings on hardware and other investments.

Server consolidation helps eliminate obsolete hardware, reduces systems administration overheads, simplifies disaster recovery scenarios and can improve overall system availability.

A virtual machine is a partition of a virtual server consisting of CPU, memory, network and other resources such as disks, which together support one operating system installed in a virtual environment. The virtualization infrastructure consists of a host server running a virtualization operating system. The host partitions the server into virtual machines so that they can be used by guest operating systems: the host operating system controls the server, while the virtualization software divides its resources among the many virtual machines. The result is a single server platform running a host operating system that can act on behalf of several servers running a variety of operating systems (CDW-G, p2). According to Paul Venezia (2007, p25), organizations that do not deploy virtualization techniques stand to spend excessive amounts maintaining their datacenters; a number of organizations have already made the switch, and in step with innovation in the computer industry, server virtualization has been shown to come with numerous benefits.
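The following Python sketch is a simplified, hypothetical model of this arrangement, not an actual hypervisor API: a host with fixed CPU and memory capacity carves those resources into virtual machine partitions, each running its own guest operating system.

# Illustrative sketch (not a real hypervisor API): a host with fixed
# CPU and memory capacity divides those resources among virtual machines.
class Host:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.vms = []

    def create_vm(self, name, cpus, memory_gb, guest_os):
        # This simplified model refuses to overcommit resources.
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError(f"not enough free resources for {name}")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        vm = {"name": name, "cpus": cpus, "memory_gb": memory_gb, "os": guest_os}
        self.vms.append(vm)
        return vm

host = Host(cpus=16, memory_gb=64)
host.create_vm("web01", cpus=4, memory_gb=8, guest_os="Linux")
host.create_vm("db01", cpus=8, memory_gb=32, guest_os="Windows Server")
print(host.free_cpus, host.free_memory_gb)   # resources remaining on the host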

Benefits of Virtualization
Enterprises increasingly face challenges in storing and managing data in today's business environment. These include shrinking IT budgets, aggressive competition, questions of economic viability, internal headcount restraints and the search for new creative approaches. Common problems IT professionals face every day include the cost of purchasing additional storage (with its associated physical space and environmental costs), the operational time needed to bring storage into service, the administration cost of storage allocation, the manpower needed to manage storage, and the growing demand for data integration within a multivendor environment.

When virtualization is properly utilized, these problems are reduced significantly, and time, cost and space are used economically so that the enterprise gets the desired value.

Time
Virtualization at the controller level is likely to offer peak performance. For instance, some applications come with features such as automatic load balancing, which spares the administrator from micro-managing data placement, data replication and performance tuning. These benefits let IT administrators decrease the time required to maintain and manage their storage while increasing the power and availability of the storage environment.

Cost Reduction
Virtualization can also reduce the total cost of ownership while still utilizing existing assets, and this has been cited as one of the top reasons organizations virtualize servers.

There is also increased administrator productivity. When applications run in virtual machines and spikes or surges occur, they can be restarted automatically on new hardware; the net result is less emergency time for administrators, and this capability is easy to build.

Reclaimed Network ports
Most servers use at least two Ethernet ports as well as two Fibre Channel ports, and these ports are often under-utilized. Virtual machines share the ports, allowing network connectivity to be reclaimed for every physical server that is unplugged (Forrester Consulting, 2009, p9).

Reclaimed Data center capacity
By consolidating servers, space is reclaimed within the data center environment and this in turn also saves on the power bill.

Enhanced Business Continuity and Disaster Recovery
When an organization invests in virtualization, it achieves faster application recovery with improved predictability. With virtualization, organizations have realized the following:

Improved service levels for more systems: critical applications have traditionally been protected by expensive solutions, while virtualization also allows failed systems to restart automatically on newer systems.

Virtual machines run almost anywhere: due to their portability, virtual machines are easily copied to another location for disaster recovery purposes and can be started on any server that has the user's virtualization software installed and running.

Virtual infrastructure protects physical systems: enterprises have deployed virtual infrastructure as a backup to their physical servers by converting backup images of their primary systems into virtual machines, giving them the option of restarting applications on the virtual infrastructure (Forrester Consulting, 2009, p9).

Faster time to market for new Applications
Virtualization enables a response to customer requests that is much faster than with traditional physical servers. Virtualization is quick because:

Many steps are eliminated from the build process: setting up a server, mounting racks and installing Fibre Channel requires technicians and a lot of time. In a virtualized environment there is no procurement or physical set-up, and the remaining steps take only a matter of minutes.
Applications move rapidly from test to production: after an application has been successfully tested, administrators have to move the end product to the production environment. When virtual servers are deployed, the same virtual machines that went through quality assurance can simply be copied to the production environment and started up.

Future of Virtualization and its impact on Information Technology and Communication
Modern systems are developed on a tiered infrastructure in which the client, server, applications and storage are combined into a single functional stack. The various layers of the system are separated and can be changed independently in order to employ best-of-breed technology, but a drawback of that independence is that installed applications have to cope with an increasing variety of platform configurations.

In creating a next-generation infrastructure, services first have to be separated from the underlying hardware on which they reside; this creates a standard service interface across the Information Technology stack. With this separation, virtualization allows hardware resources to be grouped according to their capacity, computation and connectivity and managed coherently across an enterprise infrastructure. Once the infrastructure has been virtualized, significant changes can be handled online without disrupting servers, network or storage equipment.

There is also the Xen virtualization technology, which allows users to run many operating systems on a single physical machine with an emphasis on security, performance and isolation. Xen, an open-source project, supports a technique known as paravirtualization, in which the operating systems that run on top of Xen are modified to cooperate with it; this improves performance and simplifies Xen itself.

According to Toby, Anthony and Robert (2009, p189), virtualization makes it possible and easy to move to Software as a Service (SaaS) oriented systems, because the growth of virtualization makes it easier for independent software vendors to adopt SaaS. This has made it possible to access online services such as QuickBooks, the world's leading small-business accounting software, with features such as online banking and iPhone and BlackBerry capabilities; this enables the more than 130,000 businesses that subscribe to such services to manage their enterprises anywhere, with or without a computer, and they are also able to access Google Apps and Google Apps Premier Edition. These services relate to virtualization directly or indirectly and are known to impact heavily on Information Technology.

Conclusion
Virtualization is a new concept that modern enterprises use to maximize their use of computing resources. A well designed solution mitigates possible hardware failures and comes with proper techniques in place that significantly reduce the risks of downtime.

With virtualization being addressed at all levels of modern technology, its most powerful applications are found at the network, server and storage system levels, and enterprises across the world can take advantage of the various solutions available that are designed specifically for open systems.

Organizations, whether big or small, should consider deploying this technology in order to understand its benefits fully, as it is known to reduce IT maintenance costs, which take a huge chunk of a corporation's budget.
Programming is a task that bears little relation to composing a symphony on a synthesizer. Instead, it can be a blessing in disguise. Programming needs the right skill married to creativity, and for any novice this can be a herculean task.

Teaching programming has been regarded by some as one of the seven grand challenges of computing (McGettrick, Boyle, Ibbett, Lloyd, Lovegrove and Mander, 2005).

The journey from novice to expert can be categorized into three sequential steps.

Step 1: Novices work on elementary exercises to improve their programming skills at the grassroots level.

Step 2: The novice follows in the footsteps of their master in an attempt to achieve perfection.

Step 3: The novice is adroit enough to build their own product.

Practice should be distributed in bursts throughout the learning. While a few intense periods of massed practice can produce short-term recall, better long-term retention occurs when intrinsic load is reduced by well-distributed practice (Fishman, Keller and Atkinson, 1968).

Guiding novices in their learning is more effective than asking them to determine for themselves what to explore (Tuovinen and Sweller, 1999).

Novices have to pay heed to every minute detail and follow a holistic approach that keeps all the requirements in view. Primarily, programmers have to get accustomed to the language syntax; this is the fundamental step of programming. After attaining a sufficient amount of expertise, novices should aim at building their own repository of programming solutions. The sequencing of program semantics also plays a pivotal role in enhancing knowledge. Novices may find it easier to do this if instructors first help them find the focal line that epitomizes a piece of code, then gradually expand to groups of focal lines, and finally see the entire code as a unified solution (Rist, 1989), as in the small sketch below.
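As a small illustration of this focal-line idea (the example program is invented for this purpose), consider the snippet below, where the accumulation line is the focal line that epitomizes the solution and the surrounding lines are read as support for it.

# A novice-level example: compute the average of a list of marks.
def average(marks):
    total = 0
    for mark in marks:
        total += mark          # focal line: the whole solution pivots on this accumulation
    return total / len(marks)  # support: turn the accumulated total into an average

print(average([62, 75, 88]))   # 75.0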

The choice of university is also critical. A degree from a highly reputed university will propel a career in the right direction at a faster speed, because the university's recognition at various levels helps in attaining the right kind of expertise. If the degree is complemented by an apt certification, it can further boost the drive towards becoming an expert.

But the expertise and the university degree come at a premium. One needs to excel in every module of the course, and novices have to be prepared to commit anywhere from six years to a decade of constant, tireless effort. Last but not least, one should be geared up financially: one is looking at investing anywhere around 300,000 to 800,000. Besides the money and the time, achieving this kind of expertise requires tremendous dedication and a mind always engaged in working mode. One has to subscribe to an old saying by Vincent van Gogh: great things are not done by impulse but by a series of small things brought together.

Master Data Management for Financial Institutions

1.0 Introduction
According to the World Congress on Software Engineering, if companies are to withstand fierce competition, they must do more to satisfy their customers by utilizing the enterprise data at their disposal (Wang, Ming and You 2009). Today, enterprises struggle to manage and utilize their data, resulting in unnecessary duplication of information and inefficient customer service. This is because of disorganized data and management practices: data belonging to one company is scattered across different locations of the enterprise's databases, which results in a lack of coordination among the company's departments in carrying out business with their clients.

2.0 Background and Problem Statement
The major problem faced by most financial institutions today is merging and centralizing customer details in order to determine the investment base of an individual customer and reward them with discount benefits (Butler 2002). Such a scheme is possible only when a customer's multiple investments are analyzed and discount benefits are awarded in relation to the total investment in the company's growth. An individual may be a shareholder and hold multiple active accounts in the same company, which may qualify them for discount benefits offered by the institution. This is a problem the author has observed in an organization they are linked to.

Customers have the responsibility of informing the institution of all eligible investments, such as account holdings and shareholdings, so as to get a sales charge discount at the time of purchase or transaction. The company, in turn, is required through its operational systems to recognize the total value of the customer's investments and determine the level of sales charge and service discount the customer qualifies for. This is where the author proposes the implementation of a Master Data Management (MDM) system as a solution to the problem.

3.0 Research Objectives
This research paper focuses on the capability of a Master Data Management system to solve the data management problem. The research uses the following questions to analyze the topic:

What is an MDM?

What are the functions of an MDM, and how can they integrate and implement a solution to the problem?

What are the effects of implementing an MDM on the enterprise?

Research on a Master Data Management System
In order to understand what a Master Data Management system is, we first need to understand what master data is. Master data is the core data required for operations in a business venture (Mahmood 2009); the information treated as master data varies between industries. Master Data Management can be defined as a system that encompasses the whole organization in order to integrate, manage and harmonize master data so that the information is useful in business decision-making (Buffer and Stackowisk 2009). This is intended to enhance the organization's value (BIPM ENCYCLOPEDIA 2010).

In current business circles, connectivity between organizations has risen and the amount of information has become extremely large, making it a challenge to handle. With the information-sharing environment growing fast, we also encounter the challenge of bundles of information being scattered all over, even though this information is mandatory for particular business operations of a particular organization (Sumner 2009). Thus, an organization capable of coordinating its information sharing and the collaboration of its various departments in using that information is far more likely to succeed.

The Current Architecture
The existing infrastructure has been designed to cater for specific areas of focus, and the business application architectures meet departmental needs or particular processes. This has resulted in significant duplication of work. For example, in a financial institution, customer details may be handled by one system, and updating them from another branch becomes a problem or takes very long (Berson, Dubov and Dubov 2007). The institution's costs are dispersed around the organization's units, and tracking all of them is a problem; this results in hidden costs that bring imbalances in auditing. Loan processing is also tiresome because of heavily customized, disparate technology systems (Loshin 2008). Customers are also affected because their account holdings are kept in different records, making it hard to establish the value of an individual's investment.

Berson, Dubov and Dubov (2007) describe data sources that come from the classical data warehouse. The warehouse provides a data view of customers but does not support operational applications that need to access real-time transactional data associated with a given customer; as a result, it does not provide a timely system of record for customer information. Extract, Transform and Load (ETL) tools extract data from multiple sources, transform it from the source formats to the target formats, and load the transformed, formatted data into a target database such as the CDI hub (Open Text Corporation 2010).
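The following Python sketch illustrates the extract-transform-load pattern described above under simplifying assumptions; the source formats are made up, and a plain dictionary stands in for the CDI hub.

# Minimal ETL sketch: extract from two made-up sources, transform to a
# common format, and load into a dictionary standing in for a CDI hub.
import csv, io

BRANCH_CSV = "cust_id,name,balance\n101,Jane Doe,2500\n102,John Roe,900\n"
SHARE_REGISTER = [{"holder_id": "101", "holder": "JANE DOE", "shares": 40}]

def extract():
    accounts = list(csv.DictReader(io.StringIO(BRANCH_CSV)))
    return accounts, SHARE_REGISTER

def transform(accounts, shares):
    # Map both source formats onto one target format keyed by customer id.
    records = []
    for a in accounts:
        records.append({"customer_id": a["cust_id"], "name": a["name"].title(),
                        "account_balance": float(a["balance"]), "shares": 0})
    for s in shares:
        records.append({"customer_id": s["holder_id"], "name": s["holder"].title(),
                        "account_balance": 0.0, "shares": s["shares"]})
    return records

def load(records, hub):
    for r in records:
        entry = hub.setdefault(r["customer_id"], {"name": r["name"],
                                                  "account_balance": 0.0, "shares": 0})
        entry["account_balance"] += r["account_balance"]
        entry["shares"] += r["shares"]

cdi_hub = {}
load(transform(*extract()), cdi_hub)
print(cdi_hub["101"])   # consolidated view of customer 101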

The Functions of Master Data Management
Master Data Management is integrated across the systems that contain the packets of master data. Through merging, de-duplication, standardizing, cleansing and other transformations, the integrated data is placed in a central repository called the Master Data Management hub (Loshin 2008). MDM makes master data work as an enterprise-level asset instead of an asset of small units. Customer Data Integration (CDI) is important to achieving MDM: a CDI is a special, customer-data-focused type of MDM that collects customer information and transforms it into a customer view, so that customer details from different sources can be analyzed at a single reference point while ensuring quality and consistency (Wang et al. 2009). MDM has significant operational functions, and implementing it involves business decisions (Berson, Dubov and Dubov 2007).
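Complementing the ETL sketch above, the snippet below illustrates the merging and de-duplication step that produces one master ("golden") record per customer; the match key (standardized name plus date of birth) is an illustrative assumption, not a prescribed MDM rule.

# Sketch of merging and de-duplicating customer records into one master
# record per customer; the match key (cleaned name + date of birth) is a
# simplifying assumption for the example.
def clean(record):
    return {
        "name": " ".join(record["name"].split()).title(),   # standardize spacing and case
        "dob": record["dob"],
        "holdings": record.get("holdings", []),
    }

def consolidate(records):
    hub = {}
    for raw in records:
        rec = clean(raw)
        key = (rec["name"], rec["dob"])          # de-duplication key
        master = hub.setdefault(key, {"name": rec["name"], "dob": rec["dob"],
                                      "holdings": []})
        master["holdings"].extend(rec["holdings"])   # merge holdings from all sources
    return hub

sources = [
    {"name": "jane  DOE", "dob": "1980-04-02", "holdings": ["savings-101"]},
    {"name": "Jane Doe",  "dob": "1980-04-02", "holdings": ["shares-40"]},
]
for master in consolidate(sources).values():
    print(master)   # one golden record containing both holdings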

In financial institutions, master data represents the business objects that are shared across more than one transactional application, that is, the business objects around which transactions are executed. Examples include customers' personal information, their assets and account numbers, human resources and products. When master data is collected from different departments, it comes from different processes and application systems in different formats. Consolidating it produces master data for the financial institution that can give customer details and all of a customer's investments in the company (Buffer and Stackowisk 2009).

Attributes of the Master Records

Master Data Management is a combination of several processes, applications and technologies that consolidate, clean and augment the corporate master data and synchronize it with all applications, business processes and analytical tools. This results in significant improvements in operational efficiency, reporting and fact-based decision-making (Buffer and Stackowisk 2009).

Oracle specialists suggest that to manage the master data system and keep it updated, one needs to keep the MDM system current with high-quality data. To maintain a high-quality MDM, one has to understand all possible sources and the current state of data quality in each source. The data is then put in one central repository and linked to all participating applications (Berson et al. 2007).

The data should be managed according to business rules. Synchronization of the central master data with enterprise business processes and the existing connected applications is also called for, and one should ensure the data stays in sync across the information technology landscape. The single version that exists for all master data objects should be leveraged to support business intelligence systems and reporting. In an MDM implementation, the first step is to profile the data, meaning that each master data business entity must be managed centrally in a master data repository and all existing systems that create or update the master data must be assessed for quality (Buffer and Stackowisk 2009).
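As a rough illustration of the profiling step, the sketch below measures completeness and duplicate keys per source system; the field names are assumptions for the example, not part of any particular MDM product.

# Sketch of data profiling: for each source system, measure how complete
# the records are and how many duplicate keys they contain.
from collections import Counter

def profile(source_name, records, required_fields, key_field):
    missing = sum(1 for r in records
                  for f in required_fields if not r.get(f))
    keys = Counter(r.get(key_field) for r in records)
    duplicates = sum(count - 1 for count in keys.values() if count > 1)
    return {"source": source_name, "records": len(records),
            "missing_values": missing, "duplicate_keys": duplicates}

crm = [{"customer_id": "101", "name": "Jane Doe", "email": ""},
       {"customer_id": "101", "name": "Jane Doe", "email": "jane@example.com"}]
print(profile("CRM", crm, required_fields=["name", "email"], key_field="customer_id"))
# -> {'source': 'CRM', 'records': 2, 'missing_values': 1, 'duplicate_keys': 1}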

Conclusion
With evidence from scholars and experts, Master Data Management can be deemed a solution to the information-silo problem of duplication and inefficiency. Hence, I propose that this system be adopted by financial institutions.

WIRELESS NETWORK FILE SERVERS

A wireless network is a technology that allows two or more computers to communicate, enabling file sharing, printer sharing and internet connection using standard protocols but without the use of a network cable. A distributed file system (DFS) is a client-server based architecture that allows clients to access and process data stored on a server as if it were on their own computer.

Distributed file system (DFS) in windows server 2003
The Distributed File System solution in Microsoft Windows Server 2003 provides two technologies, DFS Namespaces and DFS Replication, which together offer simplified access to files and WAN-friendly replication.

Distributed File Systems offer a number of advantages to the hospital. Through data distribution, they can be used to publish documents, software and line-of-business data to doctors remotely. A folder is hosted by multiple servers, so when a doctor requests a file through his phone he is transparently directed to the folder target he can access fastest. Through file sharing, a doctor can access files wirelessly, because the process of setting up replicated folders is simplified in Windows Server 2003 by the introduction of replication groups and replicated folders. DFS Replication also allows hospital branches with slow WAN connections to participate in replication using minimal bandwidth.
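The sketch below is only an illustration of the referral idea, not the actual Windows DFS mechanism: one namespace path maps to several folder targets, and a client is pointed at the replica in its own site first. The paths and site names are invented for the example.

# Illustration only (not the real DFS referral algorithm): a namespace
# path maps to several folder targets, and the client is referred to the
# target in its own site before the others.
NAMESPACE = {
    r"\\hospital\docs\protocols": [
        {"server": r"\\branch-a\protocols", "site": "branch-a"},
        {"server": r"\\branch-b\protocols", "site": "branch-b"},
        {"server": r"\\head-office\protocols", "site": "head-office"},
    ],
}

def refer(namespace_path, client_site):
    """Return folder targets with same-site replicas listed first."""
    targets = NAMESPACE[namespace_path]
    return sorted(targets, key=lambda t: t["site"] != client_site)

# A doctor at branch-b is referred to the branch-b replica first.
for target in refer(r"\\hospital\docs\protocols", "branch-b"):
    print(target["server"])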

Conclusion
Since doctors need up-to-date critical information remotely as fast as possible, and since DFSs typically use file or database replication (distributing copies of data to multiple servers) to protect against data access failures, a DFS is a good fit: all files are accessible to all users of the global file system, and its organization is hierarchical and directory-based.

New Developments in Information Technology Summary and Analysis

I. What You Need to Know About Google Buzz
A. Summary
The new feature in Gmail, known as Google Buzz, was not received entirely positively by users around the world (Richmond 2010). The main reason for users' negative perception of the feature is that it could become a way of unknowingly sharing private information. In fact, Gmail users were not informed beforehand of the feature's implementation, which in effect automatically signed up each user to the information-sharing capabilities of Buzz; this not only potentially exposed one's status to the public but also highlighted one's social activities and connections (Richmond 2010). Expectedly, the number of user concerns regarding the feature led Google to reconsider some aspects of Buzz. Specifically, eliminating automatic connections to other social networking services, adding privacy options, and allowing the service to be disabled were among the most notable changes (Richmond 2010).

B. Analysis
Google's attempt to hastily incorporate the Buzz feature into the account of every Gmail user should be considered a vital lesson for all firms that focus on information technology. Automatically signing users up to features that may compromise their private information will not garner public support and will instead only attract complaints. Google should be relieved that Buzz was introduced after Gmail had already acquired countless users; if such a feature had been released in such a manner during the first few months of Gmail, the user base might not have grown as it did. Hence, from a business perspective, Google Buzz is proof that firms should always prioritize privacy, or else the potential success of a product may be detrimentally affected.

II. The Apple iPad First Impressions
A. Summary
The iPad, a new interactive and portable device considered to inaugurate a new genre of gadgets, is soon set to be released onto the market. Its appearance, though, is not entirely unique; its resemblance to the iPod Touch is undeniable, as the same colour scheme and base material have been used for the iPad (Pogue 2010). However, in terms of features, the iPad is clearly superior to the other portable devices offered by Apple. Specifically, aside from being used as a phone, the iPad also contains laptop-like capabilities such as video watching, browsing and productivity features (Pogue 2010). Unlike a laptop, though, the new device is relatively small. Critics have pointed out that the iPad may be considered a fusion of contemporary devices, with an emphasis on the capability to read a book electronically (Pogue 2010). Of course, doubts regarding the device's success are still present.

B. Analysis
The development of new devices such as the iPad, with its capabilities as an electronic book reader, is of vital importance in information technology as it signifies continuous progress. However, in relation to business, novel devices need not be made completely different from previous-generation gadgets. Presenting a new device in a partly familiar fashion prevents consumers from assuming that it has a steep learning curve before its full functionality can be unlocked. In addition, designing an original product such as the iPad after the iPod Touch presents an undeniable appeal to consumers, as the latter is universally identified as a globally successful product. Therefore, despite some doubts regarding the marketability of the iPad, given that Apple has taken the aforementioned steps, such concerns will most probably prove unfounded.

III. Projecting a New Laptop
A. Summary
Projectors were previously regarded as pricey devices used for high-end entertainment or business applications. However, through advances in manufacturing technology, projectors may soon be incorporated into laptops (Williams 2010). In fact, one of the best-known firms in the computer industry has made such claims about projector integration. Specifically, Hewlett-Packard (HP) aims to release laptops with an integrated projector lens, which would allow users to project images onto nearby walls, albeit images inferior to those produced by conventional projectors (Williams 2010). Nevertheless, such a concept would gain the interest of consumers and may indeed become a selling point. Digital cameras with integrated projectors, developed by Nikon, are already available, which further implies that this technology is real (Williams 2010).

B. Analysis
Mobile devices such as laptops are an important part of information technology, so significant developments such as the integration of projectors would allow new heights of convenience. However, business-wise, firms that develop laptops should be careful in estimating whether such a technology will become a common requirement of consumers. Considerations such as pricing and the costs associated with research, development and manufacturing must be taken into account. In this sense, it would be best to first observe the market's response to HP's new line of laptops. If customer response and sales are positive, it may be appropriate for other firms to follow HP's lead. In general, if integrated projectors are a success, new and more accessible means of giving presentations will surely emerge.

OSHA goals, challenges, and use of Information Systems

The Occupational Safety and Health Administration (OSHA) is an agency that is under the umbrella of the United States Department of Labor. The agency was created in 1970 by the United States Congress under the Occupational Safety and Health Act that was signed by President Richard Nixon.

1. The organization's mission
The mission of the Occupational Safety and Health Administration is to ensure that illnesses, injuries and occupational fatalities are prevented in workplaces. OSHA achieves this mission by providing and enforcing rules and guidelines generally referred to as workplace health and safety standards. Ensuring a healthy and safe workplace is OSHA's core mission; OSHA maintains that all workers are entitled to a working environment free from harm, illness and injury. OSHA also recognizes the importance of workplace safety in improving state economies. The need to address workplace safety in a growing economy became central to OSHA's mission after it was observed that the majority of illnesses and injuries are recorded among newly employed workers in a rapidly growing economy (U.S. Department of Labor, n.d.). This mission has been reflected in much of the private sector, where employees and their employers have worked to eliminate job-related hazards.

OSHA authorizes and enforces the standards developed under the Occupational Safety and Health Act of 1970. OSHA's mandate is to assist and encourage the states in their efforts to realize safer and more healthful working conditions, and to provide support and guidelines for the states in their efforts to provide information, research, training and education in the area of occupational safety and health.

2. The environment within which the organization operates
According to the Occupational Safety and Health Act of 1970, states are permitted to develop and operate their own workplace safety programs; however, it is OSHA that approves and supervises the various state plans. This means that OSHA works with different state governments that run competitive and not-for-profit programs. Before a state can run an OSHA program, there must be assurance from the OSHA main office, and OSHA provides up to 50 percent of the total operating costs of an approved plan (Rees, 1998, p.12).

To date, a total of 22 states and jurisdictions operate complete state plans. These plans cover both the private sector and state and local government employees. In addition, five other jurisdictions, Illinois, Connecticut, New York, New Jersey and the Virgin Islands, operate OSHA state plans that cover only public employees. For a state to get approval from OSHA for a developmental plan, it has to assure OSHA that within three years it will have in place all the structural elements needed for an effective occupational safety and health program (Elsie, 2000, p.22). The necessary elements include appropriate legislation, a sufficient number of fully qualified enforcement professionals, and procedures and regulations for setting standards. Final approval of the state to carry out OSHA programs is the ultimate accreditation.

3. A key challenge facing the organization in fulfilling its mission
Perhaps one of the most pressing problems OSHA faces is the inefficiency of its existing Integrated Management Information System (IMIS), which has poorly addressed the enormous increase in silica exposure among workers in the maritime, construction and general industries. This has led to failures in implementing some of the state plans that should contribute to its smooth running. The problem is not unusual for agencies operating in many different states; it therefore requires a fully updated integrated management information system to properly manage a number of programs across different states.

The lack of a proper integrated management information system has, for instance, resulted in poor management of problems such as noise pollution and other management issues within OSHA and its state offices. Many of the regulations governing the way OSHA operates also cause it to fail to implement many of its plans; this problem stems from its constitutional limitations (Kohn, 2001, p.29). The lack of a constitutional mandate has resulted in a degree of public mistrust.

According to the provisions of the Occupational Safety and Health Act of 1970, there should be a good working environment for all workers, one that ensures their safety is well addressed. OSHA misses the requirement of this provision in its inability to regulate noise pollution in a number of states that have already subscribed to it. The problem of noise pollution is widespread; the noise that comes from industries and automobiles in particular has caused enormous debate among environmental activists, policy makers and the public (Kohn, Michael, 2004, p.34). The current OSHA surveillance system relies on manual methods that have turned out to be error-prone, sluggish and ineffective at accessing critical data about problems such as noise pollution. There are also problems arising from manual methods of data interpretation. These problems have been compounded by the poorly updated information management system and the over-reliance on manual methods of data analysis.

The use of weak information management systems has also led the majority of companies to place little trust in the data obtained from OSHA. The agency, which is expected to be authoritative for the states and the companies managed under it, should consider updating the way its operations are carried out; one such update is a critical overhaul of the information systems that run most of the agency's operations. OSHA inspects the working conditions of employees in the states that have subscribed to it (Kim, 2006, p.36), and enforcement of the laws provided by the Occupational Safety and Health Act of 1970 may not be possible if there is mistrust of OSHA's operations.

4. Use or potential use of information and information systems to help meet this key challenge
To meet the challenge of poor data collection and analysis, as well as other implementation problems, OSHA has identified key areas for tackling the problem. The implementation of the OSHA Information System (OIS) is core to managing a number of challenges that have faced OSHA since its inception. The OIS suite will ensure there is sufficient compliance assistance, voluntary program evaluation, outreach and consultation. The suite will also incorporate open-standards middleware and a central data center that ensures the integration of data from the OSHA Laboratory, State Consultation Programs, State OSHA Plans and Federal OSHA (Charles, 2000, p.10). The analytical tools adopted with the OIS will enable OSHA to generate the reports necessary for managing various operations; the problem of silica exposure, for instance, may be tamed with the application of advanced information systems.

Adopting the advanced tools of the OIS will enable OSHA to identify fatalities, injuries and illnesses easily and will provide visibility into the working populations with the highest illness and injury risks. At the same time, this technology will enable OSHA to allocate resources to the exact areas where they are most needed, ensuring that no duplication occurs during allocation and making the process cost-effective for running OSHA programs (Kohn, 2001, p.45).
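Purely as a hypothetical illustration of the kind of analytical report described (not the actual OIS), the sketch below aggregates made-up incident records to rank working populations by injury and illness rate.

# Hypothetical illustration (not the actual OIS): aggregate incident
# records to find the working populations with the highest incident rates.
from collections import defaultdict

incidents = [
    {"industry": "construction", "type": "injury"},
    {"industry": "construction", "type": "illness"},
    {"industry": "maritime", "type": "injury"},
]
workers = {"construction": 1200, "maritime": 800}   # made-up headcounts

counts = defaultdict(int)
for rec in incidents:
    counts[rec["industry"]] += 1

# Rate per 1,000 workers, highest-risk populations first.
rates = {ind: counts[ind] * 1000 / workers[ind] for ind in counts}
for industry, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{industry}: {rate:.1f} incidents per 1,000 workers")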

The OIS suite will put OSHA in a position to reduce injuries, fatalities and illnesses through enforcement, outreach, compliance assistance and consultation. The OIS is expected to fully support the expansion of e-government projects and all of OSHA's strategic goals. The goals to be achieved through the OIS address challenges such as the aging legacy software and hardware of the IMIS, the absence of applications supporting various business processes since the system was implemented in 1992, and the inability of the legacy IMIS to fully support the DOL/OSHA missions and strategic plans (Weil, 1996, p.130).

5. Analysis of current or potential societal policy challenges from use of the information system
In applying appropriate information technology to reduce the challenges OSHA faces, a number of benefits have already been seen, while others will be realized in the future. However, a number of challenges affect the application of any technology, most of them emanating from the society where the technology is to be applied. Social concerns can greatly affect the way technologies are implemented and advanced in any setting, and OSHA has to identify these challenges and address them appropriately.

Access to information over the network raises controversy over who may legally access it. With the rise of computer network insecurity, setting up a system that manages the information of different companies and states on a network will attract unauthorized access and abuses of privacy. According to the OSHA Privacy Impact Assessment (PIA), however, the technology does not pose any danger of access by unauthorized individuals, and it is declared safe for companies to embrace because all the logical and physical security controls are fully implemented and all personnel with access to personally identifiable information (PII) are cleared as per the requirements of Homeland Security Presidential Directive 12 (HSPD-12) (Kohn, Michael, 2004, p.57).

The cost of the technology still hampers a large portion of its implementations in organizations. There is therefore a need to develop cost-effective technologies that can be implemented easily without costing too much. This issue has been considered, however, and OSHA proposes that all its information systems be cost-effective for easier operations.

6. How the organization might reduce concerns while still meeting its goals in the use of technology
The potential applications of the OIS and related technologies are extensive, which implies that the technology will heavily impact the mode of operations within OSHA and its partners. The benefits of such tools cannot be forfeited simply because the needs of society have not been properly addressed. At the same time, it is society that sustains OSHA, and without society OSHA would have no function (Rees, 1998, p.61). It is therefore important to address the issues of cost-effectiveness and computer security for effective implementation of the technology.

Some of the steps that must be taken to fully address the privacy of information over the internet include heightening the level of information security so that only authorized users can access information on the network. Mounting strong network security will keep hackers at bay and protect the privacy of companies. The second problem is the affordability of the system that will be used to manage the information; OSHA may need to adopt cheaper computer systems with affordable but still secure operating systems that will be more affordable for most companies.