Improving Extrasolar Planetary Detection through the Use of the Simultaneous Differential Imager (SDI)

Given the size of the universe, the Solar System is unlikely to exist alone. The galaxies that have become known to us and are visible through telescopes lie at distances of up to billions of light-years. The search for planets orbiting stars in systems other than our own is the thrust of continuing space exploration. In this light, establishing the existence of planets outside our Solar System has been a source of curiosity for the layman and a primary concern for astronomers.

The study of extrasolar planets, or exoplanets, has established its own field in astronomy. Part of this work is to gain substantial insight into their attributes, which necessarily requires successfully recording their images for study. With this as a principal objective of much astronomical research, improving the technology for recording images with a high degree of accuracy is of utmost importance.

The first planetary detection came in 1988, with the observation of a planet orbiting the star Gamma Cephei by the astronomers Campbell, Walker and Yang. However, given the limited technology of the time, most people were sceptical about the validity of the observations. The discovery was supported the following year, and in 2002, with improved detection techniques, it was verified.

In 1992, the planets orbiting the pulsar PSR B1257+12 were discovered and were immediately confirmed. These pulsar planets, discovered by Wolszczan and Frail, are believed to have formed from the remnants of a supernova and are considered the first definitive planetary discovery. The first definitive discovery of a planet orbiting a main-sequence star was announced by the astronomers Mayor and Queloz in 1995.

Planetary Detection and SDI
The early discoveries, made with the technology available at the time, naturally created a need for refinement. Technological advancement, particularly in spectroscopy, marked the age of modern extrasolar planetary detection and paved the way for many discoveries. However, the main problem in imaging techniques for extrasolar planetary detection remained the presence of speckle and photon noise in the images produced. Reducing speckle noise through adaptive optics and differential imaging has therefore been the primary concern of researchers working on imaging techniques.

Adaptive optics (AO) corrects the deterioration of image quality by using wavefront sensors that send signals to initiate corrections. Because early astronomers focused on reconstructing degraded images, the forerunners of AO were post-detection processing techniques. The first system was used to correct the contrast of two-dimensional images, and in 1982 a version was built for the AMOS facility in Haleakala. Thereafter, the technology was widely implemented in military defence applications (Biller et al. 389).

Planetary detection imaging involves dealing with the light of the stars around which the planets orbit. Aberrations in the light waves produce patterns and noticeable noise in the image. Adaptive optics can lessen, but not totally eliminate, this stellar light contribution; its shortcoming lies in its inability to resolve and overcome the contaminating speckles. Eliminating speckles therefore requires a nearly complete understanding of their nature: how their appearance varies as a function of AO exposure time, how they vary with wavelength (speckles interfere more strongly at 1.5 microns than at 2.1 microns), and how differences in speckle formation arise from optical-path imperfections.

Other modern imaging techniques developed to address the speckle problem include Angular Differential Imaging (ADI) and Simultaneous Differential Imaging (SDI).

Simultaneous Differential Imaging is sometimes referred to as Simultaneous Spectral Differential Imaging (SSDI). ADI, though a somewhat later development, is constructed along similar lines and likewise addresses speckle noise in planetary imaging. It is a powerful technique for detecting faint extrasolar planets, particularly those at small separations from their stars, and it matches the contrast produced by SDI without requiring specialized optics, making it much simpler to use.

The SDI is an instrument that uses a quad filter to capture astronomical images. The technique passes light through a common optical path and measures it in several different narrow-band filters at the same time. Because the images share the same wavefront, the unwanted speckle pattern is nearly identical in each filter and can be subtracted away. The SDI technique was developed to address this contrast challenge: it exploits precise knowledge of wavelength, optical path and timing to achieve a clean separation of planet light from starlight. Using data-reduction routines, the SDI aligns the images recorded simultaneously through its quad filter and takes their difference, greatly reducing the speckle and photon noise in the data. By effectively suppressing speckles in the image, it is considered one of the first cameras dedicated to the discovery of new planets.
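As an illustration of the basic idea, the sketch below is a hypothetical NumPy example, not the actual SDI pipeline: it subtracts two simultaneously recorded narrow-band frames, one inside and one outside the methane absorption band. The star and its speckles appear in both frames and largely cancel, while a methane-rich companion, which is dark in the in-band frame, survives the subtraction. All names and numbers are illustrative assumptions.

```python
import numpy as np

def sdi_difference(frame_off, frame_on, star_mask):
    """Illustrative SDI-style subtraction (not the real instrument pipeline).

    frame_off : image taken just outside the CH4 band (star + speckles + companion)
    frame_on  : image taken inside the CH4 band (star + speckles; companion is faint)
    star_mask : boolean mask of pixels dominated by the stellar halo, used to
                scale the two frames to a common flux level before subtracting
    """
    scale = frame_off[star_mask].sum() / frame_on[star_mask].sum()
    return frame_off - scale * frame_on

# Toy data: a speckled halo common to both frames, plus a companion visible off-band.
rng = np.random.default_rng(0)
halo = rng.random((128, 128)) * 100.0       # shared star-plus-speckle pattern
companion = np.zeros((128, 128))
companion[80, 90] = 5.0                     # faint point source
off_band = halo + companion                 # companion bright outside the CH4 band
in_band = 0.9 * halo                        # companion suppressed inside the band
mask = halo > 50.0
residual = sdi_difference(off_band, in_band, mask)
print(round(float(residual[80, 90]), 2))    # companion signal survives the subtraction
```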

The SDI is employed at the VLT (Very Large Telescope) and the MMT (Multiple Mirror Telescope). Images are taken simultaneously through the quad filter at three wavelengths around the 1.62 μm methane bandhead, a spectral feature characteristic of gas giants and cool brown dwarfs. Taking the difference of these images then reduces, or attenuates, the speckle noise.

Extrasolar planetary detection is especially difficult because of the intrinsic faintness of the planets, which lie very close to parent stars that are far brighter. The VLT increases the resolution of the observations; however, the parent star is blurred by the turbulence of the Earth's atmosphere. These distortions are addressed with AO systems that use deformable mirrors to restore the star to the point of light it should be.

Once the star's halo is removed, the image still shows speckles in the area surrounding the star's centre, and these can mimic the image of a planet. Reflections along the optical path cause the speckles, which usually fill the field immediately surrounding the star. As the shape of the optics changes during the night, the speckles roam around the image over the course of an observation. This instrument-induced problem is experienced even with the Hubble Space Telescope.

The SDI technique was devised to address two major problems: (a) the extreme contrast between planet and star. Old gas giant planets (those of roughly 2 Gyr) are many orders of magnitude fainter than their primary while lying within about 1 arcsecond of it; young planets, being self-luminous, are comparatively brighter but still vastly fainter than the primary; and (b) the limitations of even photon-noise-limited AO systems. Although such a system should in principle detect an object many times fainter than its primary at a separation of about 1 arcsecond within an hour of exposure, speckles remain in the image, filling the area within roughly 1 arcsecond of the star, even after AO correction.

Reduction of the speckles is achieved by subtracting PSF (point spread function) images obtained with a multi-channel camera, the images being taken in narrow spectral bands.

The multi-channel camera re-images the field with the help of a holographic diffuser that evens out the degraded illumination; the re-imaging is comparable to a convolution of the PSF. The aberrations introduced by the optics are thereby folded into a convolution kernel common to the channels, and better, more mutually coherent images are produced. Greater efficiency in speckle reduction would dramatically increase the speed of direct detection of extrasolar planets.

Conquering the problem of speckles starts with understanding exactly how they are produced. Speckles are scattered light from the brighter parent star: they arise from the interaction of wavelength, light source and optical path, creating blotches as the light beams are reflected and refracted through the various filters and mirrors. The resulting speckles roam around the image as the optics continually change shape. Because the speckles originate from the star, they have the same colour as the star, whereas the planet registers a different colour.

Astronomers have standard answers to the issues that arise in the use of such optics, but it is impossible to conquer every aspect of the problem. The development of differential imaging techniques, whether angular or simultaneous, seeks to address the speckle problem in combination with the AO system (McLean 25).

The SDI imaging technique was pioneered in 2000 (Marois et al. 233). Extrasolar giant planets cooler than about 1200 K show a strong CH4 (methane) absorption bandhead at 1.62 μm. The technique uses a subtraction routine in which the star and its speckles cancel through the effective use of the quad filters, while any CH4-rich companion remains.

How exactly does the SDI technique work?
SDI measures light in several narrow-band filters. Because the images are taken simultaneously through the same optical path and are aligned before differencing, they show nearly identical speckle patterns, allowing the speckles to be removed. A double Wollaston prism is used to split the light beam: the two prisms produce four beams, which pass through the quad filter and are recorded at three different wavelengths (Source: Our SDI Techniques).

Easy discrimination of the planet from the speckles requires a proper choice of narrow-band filters. Carbon and hydrogen are present in the field of the parent star; methane, on the other hand, forms only in the atmosphere of a planet that is much colder than the star it orbits, and such cool objects are also much fainter. The heat of the star does not allow methane to form in its atmosphere, so the star can be spectrally distinguished from its fainter, methane-bearing companion.

Data-reduction tasks are then employed to align the images taken in each of the filters using a custom shift-and-subtract routine. The reduction does not eliminate the speckles at once but merely weakens them. The final touch is a rotation of the telescope: a real planet appears to rotate in the frame along with the field, whereas features that remain fixed in the image are mere speckles.
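To make the alignment and roll test concrete, here is a hypothetical sketch (the function names and choices are illustrative assumptions, not the observatory's actual routine): each filter image is shifted onto a common centre and the scaled difference is formed; frames taken at two telescope roll angles are then compared, so that anything that rotates with the field is a candidate companion while fixed residuals are speckles.

```python
import numpy as np
from scipy.ndimage import shift

def align_and_subtract(ref, img, offset, scale=1.0):
    """Shift `img` by `offset` = (dy, dx) onto `ref`'s pixel grid, scale it to the
    reference flux level, and subtract. The speckle pattern, common to both
    narrow-band frames, is strongly attenuated in the difference."""
    aligned = shift(img, offset, order=3, mode="nearest")
    return ref - scale * aligned

def roll_difference(frame_a, frame_b):
    """Difference of two reduced frames taken at different telescope roll angles.
    Instrumental speckles stay fixed on the detector and largely cancel, while a
    real companion, which rotates with the sky, appears as a positive spot at one
    position and a negative spot at the other."""
    return frame_a - frame_b
```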

In a survey of 54 nearby young stars at the VLT and MMT, conducted by Biller et al. at the Steward Observatory of the University of Arizona, the researchers obtained H-band contrasts of about 25,000 (5σ ΔF1(1.575 μm) ≈ 10.0 mag, ΔH ≈ 11.5 mag for a T6 spectral-type object) at a separation of 0.5″ from the primary star. These SDI images have the highest image contrast obtained from the ground or from space.

One of the recorded breakthroughs was the unveiling of new images of the surface of Titan, the largest moon in the Saturnian system, using the European Southern Observatory (ESO) VLT. The SDI, with its very high-contrast camera, produced sharp images in three colours at the same time, and the device can equally be used to observe solar-system objects shrouded in very thick, methane-rich atmospheres; this is what made the Titan observations possible. According to a press release from ESO (Reaching New Heights in Astronomy), observing simultaneously through a narrow, unobscured near-infrared spectral window in the dense, methane-laden atmosphere and through an adjacent waveband dominated by methane absorption yielded images from which the contamination of the atmospheric components was clearly removed. The detail of the surface was revealed with unprecedented clarity, and regions of differing reflective capacity, both high and low reflectivity, could be picked out. According to the press release, one essential benefit of these observations was support for the delivery of the Huygens probe: although the outcome was yet to be reported, the Huygens probe was projected to reach the Saturn system aboard the Cassini spacecraft, with Titan's surface as its final destination. A successful descent of the Huygens probe onto Titan's surface would allow a far more detailed study of the Saturn system (Hartung et al. 1).

Conclusion
In the pursuit of technological advancement in the direct detection of extrasolar planets, differential imaging techniques implemented on ground-based telescopes have proved a vital step towards perfecting the technology. The Planet Finder surveys using the VLT and MMT provided groundbreaking and critical specifications for future advances in imaging techniques. The use of ground-based telescopes with adaptive optics systems and the simultaneous differential imaging technique has radically improved the prospects of successfully discovering extrasolar giant planets.

In a few decades' time, our knowledge of the vast universe will have grown enormously, helping us understand the Solar System's miniature existence among the billions of galaxies that reside in space. The detection of extrasolar planets may then expand into an extragalactic pursuit of the infinite possibilities that lie in space.

Medical Databases: How Will the IT People Accomplish It?

Many people are very careful about exposing their financial information. It is perhaps even more alarming, however, to learn that one's medical records are being accessed by someone one does not know. This is discussed in the article by Brian Wheeler entitled "Who is looking at your medical records?", published on the BBC website. IT practitioners are challenged to build a system that satisfies the needs of medical practitioners as well as those of patients.

As with any other database, only a limited number of people should be given access, especially since the data stored is highly sensitive. It has already been established that the database would greatly help medical research; however, the privacy of individuals becomes an issue. The people building the database can limit the access granted to researchers: the system must be programmed so that only the attending physician can access a person's full medical record, while researchers can view medical histories with identifying information such as name and address hidden. Researchers should, however, be able to view demographic data such as age and gender, since such information is important in a research design.
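A minimal sketch of this kind of role-based, de-identified access is given below. It is purely hypothetical: the record fields, role names and access rules are illustrative assumptions, not the design described in the article.

```python
from dataclasses import dataclass

# Hypothetical record layout; the field names are illustrative only.
@dataclass
class MedicalRecord:
    name: str
    address: str
    age: int
    gender: str
    history: list

def view_record(record: MedicalRecord, role: str) -> dict:
    """Return only the fields a given role is allowed to see."""
    if role == "attending_physician":
        # Full access, including identifying information.
        return record.__dict__.copy()
    if role == "researcher":
        # De-identified view: demographics and medical history only.
        return {"age": record.age, "gender": record.gender, "history": record.history}
    raise PermissionError(f"role {role!r} has no access to medical records")

patient = MedicalRecord("J. Doe", "1 High St", 54, "F", ["2009: hypertension"])
print(view_record(patient, "researcher"))   # no name or address in the output
```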

How the programmers build the database will determine whether the technology succeeds or fails. They must balance what the industry needs against protecting the interests of the general public.

Data Protection

Introduction
An organization's data is any business's most critical asset. With the explosion of corporate data in the 1990s, the accumulation and management of data became a priority. Organizations now accumulate different types of data on very large storage systems, and gathering client data, product vendor information, product data and manufacturing metrics has become part of enterprise goals. It is this management of data that is now a cause of concern within IT departments, corporate legal offices and executive management, with much of the focus on protecting and managing data (Petrocelli, 2005).

Data Protection Techniques
There is an urgent need to protect information: data has to be kept intact and must remain available in the event of a hard-drive failure. Some of the techniques that may be used for data protection and encryption include the following:

Key-based encryption algorithms: the ability to specify a certain key or password and have the encryption method adapt automatically, so that each key or password produces a different encrypted output, which in turn requires the correct key or password in order to be decrypted. The scheme may be either symmetric or asymmetric; in the asymmetric case the encryption key (public key) is very different from the decryption key (private key), so that any attempt to derive the private key from the public key becomes completely impractical because of the time required to crack it (Frazier, 2004). Both approaches are sketched below.
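The sketch below illustrates the two approaches just described, using Python's third-party cryptography package; the choice of library and the sample messages are assumptions, since the original text names no specific implementation.

```python
# Symmetric and asymmetric encryption sketch (illustrative, not prescriptive).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric: the same key both encrypts and decrypts.
key = Fernet.generate_key()
f = Fernet(key)
token = f.encrypt(b"client record: account 1234")
assert f.decrypt(token) == b"client record: account 1234"

# Asymmetric: the public key encrypts, only the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"vendor price list", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"vendor price list"
```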

Privacy principles: privacy considerations go a long way in ensuring that users' data is adequately protected. Before designing a protection scheme, it should be determined who should have access to which data and under what conditions. This is elaborated in the six principles laid out by Marc Langheinrich for guiding privacy-aware system design (Yitao and John):

Notice: users should be aware of what data is collected about them.
Choice and consent: users should be able to choose whether or not their data is used.
Anonymity: the system should be able to mimic real-world norms of anonymity.
Security: various levels of security should be employed depending on the situation at hand.
Access: users should have complete access to their own data.

A user's IP address may also be obscured, protecting the user's anonymity during communication and data capture: users may expose their identities, but their locations remain concealed.

Conclusion
It is essential to protect users' privacy and the data held on the system. Techniques for protecting data, including cryptography and other encryption methods, should be used whenever necessary, since users' data, which is also the organization's data, has unique characteristics that are highly valuable for the organization's marketing activities.

Digital Manipulation in Photography

At the dawn of the new century, photography has evolved in many ways, one of which is technological. Photography nowadays is packaged digitally: the instruments and equipment used to produce, enhance and edit photographs now operate on digital images that are edited and manipulated using software on a computer, a technique that leads to stunning and unique images. Over the course of the 21st century, digital photography has drastically changed the landscape of advertising, editorial and commercial photography. For the most part, digital manipulation has been used to "improve" images and to give an initial photograph a whole new meaning and perspective.

In recent years, digital photography has changed the meaning of images in both advertising and editorial photography by creating an environment tampered with by artificial elements. Enhancements and "photoshopping" strip away an image's original meaning and replace it with a newly crafted one, a meaning that goes beyond the photograph itself. Recently there has been an outcry over the ethics of digital manipulation, with commercial advertising the top avenue for the issue: images of products and people are manipulated to attract customers and improve business. This is said to have stripped photography of its true meaning. The impact of digital manipulation on photography's truthfulness has brought much negativity, in the sense that every image becomes suspected of expert digital manipulation; trust and faith in each photograph are lowered because of the hype and prevalence of manipulation. In today's world, a lie is synonymous with a digitally manipulated image. Reality should be upheld.

Borrelia burgdorferi

Borrelia burgdorferi is classified as a Gram-negative spirochete (a spiral-shaped bacterium) belonging to the genus Borrelia. B. burgdorferi occurs mostly in North America and is also found in Europe and Asia. This bacterium is the main cause of Lyme disease, a zoonotic, multisystem disease characterized by arthritis, neuritis and carditis and transmitted mostly by ixodid ticks, particularly Ixodes scapularis, the deer tick. Lyme disease was named after a village in Connecticut in the United States of America where a cluster of cases was first identified in 1975. The disease was observed to be tick-borne, but its cause remained unknown until the bacterium was identified in 1982 (Engelkirk and Duben-Engelkirk 87-89).

Epidemiologically, the disease occurs in endemic areas of the former Soviet Union, southern Ontario, Australia, China and Japan; cases are observed during the summer, and the distribution of the disease coincides with the availability or abundance of ticks. This study addresses the disease by discussing its modes of transmission, incubation period, portals of entry and exit, pathogenicity, factors influencing virulence, embalming implications and restorative art implications.

Modes of Transmission
Lyme disease is transmitted mainly through the bite of an infected ixodid tick: the tick's saliva transfers the spirochete into the host and also contains substances that interfere with the immune response at the site of the bite, allowing the bacterium to multiply and migrate into the body and cause infection. However, evidence has been reported for a number of non-tick-related modes of transmission, including person-to-person contact through sexual intercourse and passage across the placenta from mother to infant. Being a zoonotic disease, it may be acquired by dairy cattle and other animals and passed to humans through the food chain (Engelkirk and Duben-Engelkirk 103). Borrelia burgdorferi can also survive the purification processes applied to donated blood, so Lyme disease can be acquired through blood transfusion (Engelkirk and Duben-Engelkirk 106).

Incubation Period
The disease takes about three to thirty-two days to manifest after exposure to the causative agent. Symptoms related to the tick bite include a red spot at the location of the bite that gradually enlarges, normally with a pale centre, referred to as erythema migrans. Systemic symptoms may include fatigue, headaches, slight fever, swollen lymph glands and joint and muscle pains.

Portals of Entry and Exit
The skin or mucous membrane is the portal of entry for Borrelia burgdorferi, which then affects all the tissues and major organs of the body. The bacterium responsible for Lyme disease, and especially its tick-borne strain, poses a particular challenge because it is carried in the body fluids (Weintraub 54-56).

Pathogenicity
The white-footed mouse is the natural reservoir for Borrelia burgdorferi. Ticks usually pass the spirochetes on to white-tailed deer, humans and other warm-blooded animals after taking a blood meal from an infected animal. Borrelia burgdorferi then invades the blood and tissues of infected mammals and birds. Once the spirochete has been inoculated into the skin, it moves into the blood of infected birds and mammals through the extracellular matrix, owing to its ability to bind to components of the extracellular matrix such as platelets, red blood cells and epithelial cells (Norris 1320). It disseminates rapidly, enters all tissues of the body and quickly crosses the blood-brain barrier, living inside neurons and glial cells; it sometimes crosses the placental barrier and can infect the fetus at any stage of pregnancy. The bacterium produces biotoxins with high tissue affinity, mainly neurotoxins with strong tropism for lipid-rich structures such as the nervous system, muscles, joints and lungs. When Borrelia burgdorferi invades the brain, the result is an inflammatory and neurodegenerative disorder known as neuroborreliosis.

Factors influencing virulence
Several factors may predispose an individual to this disease. Borrelia burgdorferi, a master of disguise, can lie dormant for several years and be reactivated by conditions such as increased stress levels or other infections that compromise one's immunity. Environmental temperature and humidity also influence the virulence of the bacterium in certain parts of the world.

Restorative art implication
Protective clothing can also be embraced for prevention; for example, people in endemic areas can wear long-sleeved shirts, gumboots and gloves when handling pets and other animals (Weintraub 42-43). In restorative work, embalming, which involves the removal of body fluids to preserve the cadaver, helps stop the spirochete from migrating further within the host.

Conclusion
Currently there is still no standard method for growing B. burgdorferi in vitro; although spirochetes can be detected in culture media after a period of three weeks, some isolates are still not visible even after several months. A great deal of research therefore remains to be done on the management of, and even a vaccine for, this disease, which has great implications for both human and animal health. A more multidisciplinary approach may shed light on how to approach its prevention and treatment.

The Web 2.0 Technology

Information technology enterprises all over the world face security threats, especially with the introduction of Web 2.0. Everyone in the office feels insecure when their password has leaked to another party, and the effect is even worse when they stumble into malware while opening a social site or find an email attachment they did not expect. The introduction of Web 2.0 is expected to be a significant threat to the IT departments of many firms worldwide: the social sites of Web 2.0, such as Facebook, pose a great danger of rapid malware spread and of data insecurity through attacks by viruses or hacked programs. Some security measures are discussed in this essay.

Web 2.0 encourages many users to exchange views, ideas, jokes and other information, in or out of the office, so long as they have an internet connection. It is a set of user-centred web applications designed to facilitate the interactive exchange of information, collaboration and interoperability on the World Wide Web.

It includes sites that offer blogs, video sharing, social networking services such as Facebook and Twitter, and wikis, among others. With this technology, photos can be exchanged and games played, while music, video, electronic learning, travel, mobile services, widgets, fun sharing, storage services, management operations, collaboration, communication, business transactions and search operations are offered within the system. These web applications grew out of developments such as Ajax applied to Web 1.0, made possible by advances in technology that enable programmers and designers to build more user-friendly programs and publish them over the internet (Ever, 2006). Web 2.0 has added RSS and Eclipse to the initial Web 1.0 features such as blogs, wikis and other interactive features.

The term Web 2.0 was coined by O'Reilly, who had been thinking about next-generation web concepts. Since its introduction at the O'Reilly and MediaLive International conference in 2004, the concept has been pursued consistently by O'Reilly Media through a series of annual conferences. There has, however, been controversy and debate over the authenticity and originality of Web 2.0, which is meant to denote a new version of the World Wide Web; the majority of critics think it is not a new version at all but rather a development of the traditional World Wide Web, usually denoted Web 1.0.

Despite the controversy, it is important to acknowledge that Web 2.0 has enabled a variety of user-friendly sites that allow greater collaboration among internet users and information providers, as well as more business-related management assistance. Web 2.0 is a good source of information through Wikipedia, downloadable books and journals, increasingly frequent blogging, and news provision (Fraser and Dutta, 2009). Data entry is easy with the present Web 2.0 design, since it allows users to amend information on the World Wide Web.

Unfortunately, the interactive nature of Web 2.0 has posed a great danger to information security and database management. Most information technology administrators complain that the availability of social sites on office computers endangers the authenticity and integrity of information and that business enterprises now face a greater challenge in data security. This is due to the increased rate at which malware spreads through the social environments created on the internet, such as Facebook, or through the posting of photos and games. These analysts have further pointed out that such malware is more difficult to deal with than normally spread internet viruses.

Data collected for the United States of America, Australia and the UK indicate that the manifestation of malicious software in most IT-related firms is on the rise and that much of this malware cannot easily be removed from the work environment. The rise is attributed to the majority of employees using socially enabled interaction, via Facebook or games sites, on the office internet services. The data, covering 803 IT experts working in firms of between 100 and 5,000 staff that use internet services in their offices, show that 73% of the IT officers believed the malware affecting their machines was harder to clean up than e-mail-based viruses or worms. Managing the web-based threats that were likely to endanger most software and hardware in 2010 requires greater expertise from the majority of business enterprises. About 80% of IT administrators agree that the interactive sites resulting from the development of Web 2.0 are a great threat to information security (Eddy, 2010). The machines most in need of maximum security include mobile systems and laptops, and more attention is also required for information integrity checks, information confidentiality and the prevention of data loss.

Most of the IT professionals admitted that the industry will have to spend more on data security management because their Microsoft operating systems were especially vulnerable. Web browsers were rated 24% vulnerable, Twitter and Facebook 23% at risk, and Adobe Flash 24% vulnerable to these threats. Other vulnerable channels include media downloads (32%), P2P networking (25%) and webmail accounts (25%). The data thus show that a sizeable share of companies is at risk in terms of information security, and data management requirements are on the rise. It is encouraging, however, that most companies have already installed data security software that can detect threats as they enter a machine and can therefore ensure some security for their data. The installed software includes anti-spyware (57%), anti-phishing tools (47%), protection against SQL injection on websites (32%) and virus protection, which takes the lion's share (60%).

The majority of IT managers have set security measures governing employees' use of the internet, limiting access to social sites and thereby reducing the possibility of threats spreading through their machines. The employers admitted that they had to restrict employee access to social sites such as Facebook and to chat services unless it was necessary. This is one of the most commonly used security measures against the spread of virus and malware attacks among SMB companies.

The managers said they would send occasional reminders to their employees and warn new recruits against certain social sites. Another way companies can counter attacks by such viruses is by ensuring that employees keep the latest anti-malware definitions installed, to assist in detecting and removing threats before they attack the system. Some software companies work day in, day out to produce programs that can counter the threats as they appear on the internet (Ever, 2006). Some of these programs can scan for threats quickly and remove them at once, restoring the machine's working capability instantly. However, this is an extra cost to IT firms.

Apart from the threats attacking office machines and software, many critics argue against Web 2.0 as a means of wasting office time while employees chat. Logging on to Facebook and other social sites in the workplace can be a punishable offence that may even result in the loss of one's job. Many financial institutions use firewalls to deter employees from visiting social sites; for example, British Gas and Lloyds TSB in the UK make good use of protective firewall software.

In some financial institutions, applicants who are on Facebook are not recruited, to avoid the trouble of policing them all the time. Elsewhere, Barracuda Networks, one of the major security software developers, has confirmed that it uses web filters to curb the threats arising from the use of Facebook and Myspace; the filters bar access to and use of those sites.

Many IT administrators declare that business managers who connect their laptops to company networks are at great risk of losing data through Web 2.0 programs. Managers are therefore encouraged to avoid open software platforms and advised to use appropriate security software or web filters to deter their employees from reaching secured data through Web 2.0 connectivity and the malware-infested social software zone. The rising trend of infections in IT firms is seen as the main reason why most business leaders are against the use of Web 2.0. It is nevertheless clear that many security software firms are emerging to counter the threats encountered in the information technology world; this is a burden for business managers, who must spend more on data security (Fraser and Dutta, 2009). It is all but inevitable that business leaders will have to spend more if they are to maintain data integrity and preserve the value and validity of their data over a fairly long time. However much managers oppose the entry of Web 2.0 into IT, they will very soon be obliged to use it, since it offers the best network connectivity and efficiency in disseminating information, with interactive sites favourable not only to employees but also to managers.

The Google tools found on Web 2.0 are among the best for keeping up with world news and with company management strategies that may help other managers improve their productivity. The music and fun found on Web 2.0 are a motivating factor for most office workers, who concentrate better and are rarely bored by office work. Most employees find it fulfilling, claiming that they now do more work per unit of time than they did before social sites were available on the office networks.

In conclusion, Web 2.0 is a good product worth embracing in different types of firms because it increases the productivity of workers and managers. However, it needs to be deployed and used with care, so data security measures are needed: web filters can be applied as a security measure, and the use of up-to-date security software is the best approach to data security.

Tidal Power as a Source of Renewable Energy for the UK

The gravitational force of the Moon causes the waters of the Earth's oceans and seas to bulge along the axis pointing directly at the Moon (Tidal Power 2010). These forces, coupled with the centripetal and centrifugal forces resulting from the Earth's rotation, cause the rise and fall of oceanic tides. Tides are highest (spring tides) when the Moon and the Sun are in line, pulling the Earth's oceanic waters in one direction, and lowest (neap tides) when the Moon and the Sun lie along perpendicular axes centred on the Earth (Tidal Power 2010).

Figure 1 Tidal range as affected by the moon and the sun (Currie et al 2002)
One lunar cycle takes approximately 4 weeks and the Earth rotates about its axis once every 24 hours (Tidal Power 2010). The combination produces a tidal cycle approximately every 12.5 hours. At a time when the world is striving for a green revolution in which renewable sources of energy causing minimal damage to the environment are the future, the predictability of the tidal cycle makes this natural phenomenon a highly promising source of renewable energy.
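The 12.5-hour figure follows from simple arithmetic (a back-of-the-envelope check, not a calculation quoted from the cited sources): because the Moon advances along its roughly four-week orbit while the Earth spins, a point on Earth faces the Moon again only after slightly more than one rotation, and the two tidal bulges halve that interval.

\[
T_{\text{lunar day}} \approx 24\,\text{h}\left(1 + \tfrac{1}{28}\right) \approx 24.9\,\text{h},
\qquad
T_{\text{tide}} \approx \tfrac{1}{2}\,T_{\text{lunar day}} \approx 12.4\,\text{h} \approx 12.5\,\text{h}.
\]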

Potential Tidal Power Sites in the UK
The United Kingdom has many potential sites for the generation of tidal power. The River Severn between Wales and England is very suitable for a barrage, as are the Sound of Islay and the Pentland Firth in Scotland, and Pembrokeshire (Sustainable Development Commission 2007). According to DUKES, the UK had a total electricity generation of 385 terawatt-hours (TWh), equivalent to an average electrical output of 43.9 gigawatts (GWe). The total tidal generation capacity of all tidal barrages, tidal streams and estuaries having at least one bank on English shores is 5.57 GWe, meaning that the UK could source up to 13 per cent of its total electricity requirements from harnessing tidal energy (Smith 2010).
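These figures are mutually consistent, as a quick check shows (my own arithmetic, not a quoted calculation):

\[
\frac{385\ \text{TWh/yr}}{8760\ \text{h/yr}} \approx 43.9\ \text{GW},
\qquad
\frac{5.57\ \text{GWe}}{43.9\ \text{GWe}} \approx 0.127 \approx 13\%.
\]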

The table below is a summary of the potential tidal power sites and their projected capacity.
Site             Projected Electrical Output (TWh)
Severn           25.00
Solway            9.66
Morecambe Bay     5.98
Wash              3.70
Humber            1.65
Thames            1.37
Dee               0.89
Mersey            0.57
Total            48.82
Table 1: Tidal site capacities in the UK (Smith 2010)

Design and Technology Considerations
To harness tidal power, current technologies allow either the construction of a tidal barrage or the utilisation of tidal streams. A barrage is an installation across a bay or an estuary that lets water flow through it as the tide rises (Tidal Power 2010). When the tide turns, the gates are closed, effectively damming water in the basin behind the barrage and creating a hydrostatic head. This water can then be released through the gates to drive turbines, generating electricity.

The diagram below is an illustration of a simplified tidal barrage.

 Figure 2 Simplified tidal barrage (Currie et al 2002)
There are various turbine designs available for barrage power generation. In a bulb turbine, the water flows around the turbine housing; the disadvantage of this design is that maintenance requires the water flow to be stopped, causing delays and lost generation. In a rim turbine, the generator is mounted perpendicular to the waterway for easier access and maintenance (Tidal Power 2010); the disadvantage of this arrangement is the difficulty of regulating power generation. Tubular turbines are most recommended for the UK's greatest potential tidal site, the Severn Estuary (Sustainable Development Commission 2007). In a tubular turbine, the blades are coupled through an elongated shaft and oriented at an angle so that the generator sits at the top of the barrage.

Figure 3: (a) a bulb turbine, (b) a rim turbine, and (c) a tubular turbine (Currie et al 2002)

The implementation of tidal generation plants in the UK has been slow because of the high initial costs involved and the lack of technologies that do little harm to the flourishing marine ecosystems around estuaries and lagoons (Smith 2010). Feasibility studies and further research on employing tidal streams in the deeper seas should be carried out to tap this source of green energy.

Power Available From a Barrage
Figure 4 Diagrammatic representation of a barrage (Currie et al 2002)
If ρ is the density of seawater (kg/m³), g the acceleration due to gravity, Cd the barrage's discharge coefficient and A the approximate area of the basin (m²), then at any instant the power derivable from the turbine can be written in terms of Z1 and Z2, the levels of the water in the sea and in the basin respectively.
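The equation itself did not survive in this copy. A standard reconstruction under the usual sluice-flow assumptions (treating the flow passage as an orifice of effective area A with discharge coefficient Cd, which may differ from the exact form used by Currie et al) is:

\[
Q = C_d\, A \sqrt{2g\,(Z_1 - Z_2)},
\qquad
P = \rho\, g\,(Z_1 - Z_2)\, Q = C_d\, A\, \rho \sqrt{2\, g^{3}\,(Z_1 - Z_2)^{3}} .
\]

Here Q is the instantaneous volume flow through the turbine and P the instantaneous hydraulic power available from the head difference Z1 - Z2.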

Economic Factors
Barrage construction requires large capital investment, and private investors are reluctant to take up such projects since the payback period is long. The UK government should therefore either invest directly or attract capable long-term investors to tap this energy source (Sustainable Development Commission 2010). After the initial installation, maintenance is minimal and a turbine may function for over 30 years.

The cost-effectiveness of tidal power stations is determined by the size of the barrage and the height difference between low and high tide (Tidal Power 2010). The feasibility of a tidal generator is commonly judged by the ratio of the barrage length to its annual generation in kilowatt-hours, with a smaller ratio indicating a more worthwhile scheme. This factor lies behind the initiation of the Swansea, Fifoots Point and North Wales tidal generation stations in Wales, where tidal ranges are high.

Environmental and Social Impacts of Tidal Power Generation
Tidal energy is renewable, providing electricity without emitting greenhouse gases or toxic by-products (Sustainable Development Commission 2007). If efficiently harnessed, tidal energy would reduce reliance on nuclear generators, which cause thermal pollution and radioactive waste. However, there is a risk of disrupting marine and shoreline ecosystems.

Damming bays or estuaries could also alter the geography of the shoreline, affecting recreational activities, fishing and shipping (Sustainable Development Commission 2007). However, the construction of tidal power stations should follow the example of the La Rance barrage, which has been operational in France since 1966 while causing negligible disruption to the ecosystem and to recreational activities.

Potential for Reducing Carbon Emissions
The Severn Barrage, if completed, is estimated to save 18 million tons of coal every year. If other feasible projects are built, then the levels of carbon emissions could be reduced significantly (Sustainable Development Commission 2007). At a time when climate change and global warming are posing a serious threat to world ecosystems and the survival of humanity, tidal energy should be appreciated as a way of powering homes and industries while decreasing carbon emissions to the atmosphere.

Conclusion
Increasing demand for energy has brought environmental degradation through the use of fossil fuels that release carbon into the atmosphere. Mining itself pollutes the environment, and non-renewable sources of energy are heading towards exhaustion. This is a call for the UK and the world to invest in renewable energy. The UK is endowed with enough natural resources to develop tidal energy into a major contributor to its electricity requirements (Smith 2010). Capital should therefore be raised to realise this potential, as it offers the opportunity of a more sustainable, environmentally friendly future for the energy sector.