Data Protection

Introduction
An organization's data is any business's most critical asset. With the explosion of corporate data in the 1990s, the accumulation and management of data is now a priority. Organizations are trying to accumulate different types of data on very large storage systems, and gathering client data, product vendor information, product data and manufacturing metrics has now become part of enterprise goals. It is this management of data that is now a cause of concern within IT departments, corporate legal offices and executive management, with much focus being on protecting and managing data (Petrocelli, 2005).

Data Protection Techniques
There is an urgent need to protect information: data has to be kept intact and must remain available in the event of a hard drive failure. Some of the techniques that may be used for data protection and encryption include the following.

Key-based encryption algorithms: This is the ability to specify a certain key or password and at the same time have the encryption method alter itself automatically, such that each key or password produces a different encrypted output which requires a unique key or password in order to be decrypted. The key may be either symmetric or asymmetric, in which case the encryption key (public key) is very different from the decryption key (private key), so that any attempt to derive the private key from the public key becomes completely impractical due to the number of hours that would be required to crack it (R. E. Frazier, 2004).
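As a minimal illustrative sketch (not from the source), the snippet below shows symmetric key-based encryption using Python's third-party cryptography package: each generated key yields a different ciphertext, and only the matching key can decrypt it. The sample plaintext is invented.

    # Key-based symmetric encryption: a fresh key every run, and the
    # ciphertext is useless without that exact key.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # each run produces a different key
    cipher = Fernet(key)

    token = cipher.encrypt(b"client record: account 4711")
    print(token)                       # ciphertext differs per key and message
    print(cipher.decrypt(token))       # b'client record: account 4711'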

Privacy principles: Privacy considerations may go a long way in ensuring users' data is adequately protected. Before designing a protection scheme it should be determined who should have access to what data and under which conditions. This is elaborated in the six principles laid out by Marc Langheinrich for guiding privacy-aware system design (Yitao and John).

Notice: Users should be aware of the data collected about them.
Choice and consent: Users should be able to choose whether their data is used or not.

Anonymity: The system should be able to mimic real-world norms of anonymity.
Security: Various levels of security should be employed depending on the situation at hand.
Access: Users should have complete access to their data.

Here a user's IP address may also be obscured, hence protecting the user's anonymity in communication and data capture. Users expose their identities but their locations remain concealed.
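One way to obscure addresses, offered here as an illustrative assumption rather than a technique named above, is to truncate logged IPv4 addresses to their network prefix so that the exact host is concealed while coarse information survives:

    # Zero the host octet of an IPv4 address before logging it,
    # e.g. 203.0.113.42 -> 203.0.113.0, using only the standard library.
    import ipaddress

    def anonymize_ip(addr: str) -> str:
        net = ipaddress.ip_network(addr + "/24", strict=False)
        return str(net.network_address)

    print(anonymize_ip("203.0.113.42"))  # 203.0.113.0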

Conclusion
It is of great essence to protect users' privacy and the data available on the system. Techniques for protecting data, which may involve cryptography and other encryption methods, should always be used whenever necessary, since users' data, which is also the organization's data, has unique characteristics that are ideal for the organization's marketing activities.

Digital Manipulation in Photography

At the dawn of the new century, photography has evolved in many ways, one particular aspect being technology. Photography nowadays is packaged digitally. The instruments and equipment used in producing, enhancing and editing photographs now work on digital images, which are edited and manipulated using a program or software on a computer. This technique leads to stunning and unique images. Through the course of the 21st century, digital photography has drastically changed the landscape of advertising, editorial and commercial photography. For the most part, digital manipulation of images has been used to improve images and to give a whole new meaning and perspective to an initial photograph.

In recent years, digital photography has changed the meaning of images in relation to both advertising and editorial photography by creating an environment that is tampered with by artificial elements. Enhancements and photoshopping strip away an image's original meaning and replace it with a newly crafted one through digital manipulation. This meaning goes beyond the photograph itself. Recently, there has been an outburst of ethical objections to digital manipulation, commercial advertising being the top avenue for this issue. Images of products or people have been manipulated in order to attract customers and improve business. This scenario is said to have stripped photography of its true meaning. The impact of digital manipulation on photography as a truthful medium has brought about much negativity, in the sense that each image becomes a suspect of superb digital manipulation. Truthfulness and faith in each photograph are lowered because of the hype around, and use of, digital manipulation. In today's world, a lie is synonymous with a digitally manipulated image. Reality should be upheld.

Borrelia burgdorferi

Borrelia burgdorferi is classified as a gram-negative spirochete (spiral-shaped bacterium) belonging to the genus Borrelia. B. burgdorferi occurs mostly in North America and is also found in Europe and Asia. This particular bacterium is the main cause of Lyme disease, a zoonotic disease transmitted mostly by Ixodid ticks, notably Ixodes scapularis, the deer tick, and a multisystem disease characterized by arthritis, neuritis and carditis. Lyme disease was named after a village in Connecticut in the United States of America where a number of cases were first isolated in 1975. It was observed that this disease was tick-borne; the cause of the disease, however, remained unknown until it was identified in 1982 (Engelkirk and Duben-Engelkirk 87-89).

The epidemiology of this disease centres on endemic areas of the Soviet Union, southern Ontario, Australia, China and Japan, and cases are observed during the summer; the distribution of this disease also coincides with the availability or abundance of ticks. This study seeks to address this disease by discussing its mode of transmission, incubation period, portal of entry, portal of exit, pathogenicity, factors influencing virulence, embalming implications and restorative art implications.

Modes of Transmission
The mode of transmission for Lyme disease is mainly through the bite of an infected Ixodid tick: its saliva transfers the spirochete into the host and also contains substances that interfere with the immune response at the site of the bite, allowing multiplication and migration into the body and hence the infection. However, substantial evidence has shown that there are a number of non-insect-related modes of transmission, including person-to-person contact through sexual intercourse and transmission through the placenta from mother to infant. Being a zoonotic disease, dairy cattle and other animals may acquire this disease and pass it to humans through the food chain (Engelkirk and Duben-Engelkirk 103). Borrelia burgdorferi can survive the processes involved in purifying donated blood, hence Lyme disease can be acquired through blood transfusion (Engelkirk and Duben-Engelkirk 106).

Incubation Period
The incubation period for this disease is about three to thirty-two days after exposure of the individual to the causative agent. Tick-bite-related symptoms include a spot that appears red in color at the location of the bite and grows gradually bigger, normally with a pale middle part, referred to as erythema migrans. Systemic symptoms may include fatigue, headaches, slight fever, swollen lymph glands and joint and muscle pains.

Portals of Entry and Exit
The skin or mucous membrane is the portal of entry for the Borrelia burgdorferi bacterium, which affects all the tissues and major organs in the body. Borrelia burgdorferi, responsible for Lyme disease, and especially the tick-borne strain, poses a great challenge because it affects the body fluids (Weintraub 54-56). However, embalming, which involves the removal of the body fluids to preserve the cadaver, helps stop the spirochete from further migration in the susceptible host.

Pathogenicity
The white-footed mouse is the natural reservoir for Borrelia burgdorferi. Ticks usually pass on the spirochetes to white-tailed deer, humans, and other warm-blooded animals when they take a blood meal from an infected animal. Borrelia burgdorferi then invades the blood and tissues of infected mammals and birds. Once the Borrelia burgdorferi spirochete has been inoculated into the skin it moves into the blood of infected birds/mammals through the extracellular matrix, due to its ability to bind to components of the extracellular matrix and to cells such as platelets, red blood cells and epithelial cells (Norris 1320). Borrelia burgdorferi disseminates rapidly in the body, enters all tissues and rapidly crosses the blood-brain barrier. The spirochetes then live inside neurons and glial cells, sometimes crossing the placental barrier, and can infect the fetus at any stage of pregnancy. The bacterium produces biotoxins which have high tissue affinity, mainly neurotoxins with high tropism for lipid structures such as the nervous system, muscles, joints and lungs. When Borrelia burgdorferi invades the brain, it results in an inflammatory and neurodegenerative disorder known as neuroborreliosis.

Factors influencing virulence
There are several factors that may predispose an individual to this disease. The Borrelia burgdorferi bacterium, being a master of disguise, possesses the ability to lie dormant for several years and can be activated by conditions that include increased stress levels and other infections that may compromise one's immunity. Environmental temperature and humidity also influence the virulence of the bacterium in certain parts of the world.

Restorative art implications
Protective clothing can also be embraced in prevention; for example, people in these areas can wear long-sleeved shirts, gumboots and gloves when handling pets and other animals (Weintraub 42-43). Here too, embalming, which involves the removal of the body fluids to preserve the cadaver, helps stop the spirochete from further migration in the susceptible host.

Conclusion
Currently, there still does not exist a standard method for the growth of B. burgdorferi in vitro; although spirochetes can be detected in culture media after a period of three weeks, some isolates are still not visible even after several months. Therefore a lot of research still needs to be done on the management of, and on finding a vaccine for, this disease, which has great implications for both human and animal health. A more multidisciplinary approach may shed light on how to approach the prevention and even treatment of this disease.

The Web 2.0 Technology

Information technology enterprises all over the world are faced with security threats, especially with the introduction of Web 2.0. Every person in the office feels insecure when their password has leaked to another party. The effect is even worse when they stumble into malware while opening some social site, or find an email attachment that they did not expect. The introduction of Web 2.0 is expected to be a great threat to the IT departments of many firms all over the world. The social sites found on Web 2.0, such as Facebook, pose a great danger of quick spread of malware, and of data insecurity through attack by viruses or hacked programs. Some security measures are discussed in this essay.

Web 2.0 is an interactive platform that encourages many users to exchange views, ideas, jokes, and other forms of information in or out of the office, so long as one has an Internet connection. Web 2.0 is a set of user-centred web applications designed to facilitate interactive exchange of information, collaboration and interoperability of activities as provided on the WWW.

It includes sites that offer blogs, video sharing, social networking services like Facebook and Twitter, and wikis, among others. With this technology, photos can be exchanged and games can be played, while music, videos, electronic learning, travel, mobile, widget, fun-sharing, storage, management, collaboration, communication and business transaction services, as well as search operations, are offered within the system. These web applications grew out of the Ajax techniques of Web 1.0. This has been possible through advancement in technology, which enables computer programmers and designers to generate more user-friendly programs and post them for use through Internet services (Ever, 2006). Web 2.0 has added RSS and Eclipse to the initial features of Web 1.0, namely blogs, wikis and other interactive features.

The development of Web 2.0 is the creation of O'Reilly, who had been thinking about the web concepts of the next generation. Since its unveiling at MediaLive International in 2004, the copyright procedures have been worked on consistently by O'Reilly Media, which pursues the concept through a series of annual conferences. However, there has been controversy and debate criticizing the authenticity and originality of Web 2.0, which is meant to denote the new version of the World Wide Web; the majority of critics question the authenticity of this type of web and think that it is not a new version, as is claimed, but rather a development of the traditional World Wide Web, which is usually denoted Web 1.0.

Despite the controversy, it is important to acknowledge that Web 2.0 has enabled quite a variety of user-friendly sites that encompass greater collaboration among internet users and information providers, as well as more business-related management assistance. Web 2.0 is a good source of information through Wikipedia, downloadable book materials and journals, with an increased frequency of blogging activity as well as news provision (Fraser and Dutta, 2009). Data entry processes are easy with the present Web 2.0 design, since it allows amendment of the worldwide web information.

Unfortunately, the availability of the interactive Web 2.0 has posed great danger to information security and database management. Most information technology administrators have complained that the availability of social sites in computer internet programs has endangered the authenticity and integrity of information and that business enterprises are now facing a greater challenge in data security. This is due to the increased rate of malware spreading quickly through the socially created internet environment on Facebook, or even through the posting of photos and games. These analysts have further pointed out that this malware is more difficult to deal with than normally spread internet viruses.

Data collected for the United States of America, Australia, and the UK show that the manifestation of malicious software in most IT-related firms is on the rise and that most of this malware cannot be easily removed from the work environment. The rise is attributed to the majority of employees using the socially enabled work environment, interacting via Facebook or games sites through the computer internet services available in the office. The data, which covered 803 IT experts working in firms with between 100 and 5,000 staff using internet services in their offices, show that 73% of the IT officers believed that the malware affecting their machines was not as easy to clean as e-mail-based viruses or worms. It will require more expertise for the majority of business enterprises to manage the web-based threats that are likely to pose a great threat to most software and hardware in the year 2010. About 80% of IT administrators agree with the view that the interactive web sites resulting from the development of Web 2.0 are a great threat to information security (Eddy, 2010). The machines that most need to be provided with maximum security include mobile systems and laptops, with more attention also required for information integrity checks, information confidentiality and prevention of data loss.

Most of the IT professionals confessed that the industry will have to spend more on the management of data security because their Microsoft operating systems were more vulnerable. The web browsers were 24% vulnerable, Twitter and Facebook were 23% at risk, while Adobe Flash was 24% vulnerable to attack by these threats. Other vulnerable channels include media downloads (32%), P2P networking (25%) and web mail accounts (25%). Thus the data show that a substantial share of companies is at risk in terms of information security, and data management requirements are on the rise. However, it is encouraging that most of the companies have already installed data security software that can detect threats as they enter the machine and can therefore ensure some security for their data. The installed software includes anti-spyware (57%), anti-phishing protection (47%), defences against SQL website injections (32%) and virus protection, which takes the lion's share of protection (60%).

The majority of IT managers have set security measures governing employee use of the internet, limiting access to the social sites and hence reducing the possibility of threats spreading through their machines. The employers here admitted that they had to limit access to social sites like Facebook, and to chatting, unless it was necessary for the employees. This is one of the most commonly used security measures against the spread of virus and malware attacks among the majority of SMB companies.

The managers insisted that they would send occasional reminders to their employees as well as warn new recruits against some social sites. Another way companies can counter attacks by such viruses is by ensuring that employees keep the latest versions of their anti-malware software to assist in the detection and removal of threats before they attack the system. Some software companies are working hard day in, day out to generate programs that can counter the threats as they appear on the internet (Ever, 2006). Some of these programs can perform fast checks for the threats and heal them at once, restoring the machine's working capability instantly. However, this is an extra cost to the IT firms.

Apart from the threats attacking the machines and software in an office, many critics argue against Web 2.0 as a means of wasting time in the office while employees chat. Logging on to Facebook and other social sites in a workplace is a punishable offense that may even result in the loss of one's job. Most financial institutions use firewalls to deter employees from visiting social sites; for example, British Gas and Lloyds TSB in the UK are heavy users of protective firewall software.

In some financial institutions, employees who are on Facebook are not recruited, to avoid the havoc of running after them all the time. Elsewhere, Barracuda Networks, one of the major companies dealing with security software development, has confirmed that it employs web filters to curb the problem of threats resulting from the use of Facebook and Myspace. The web filters bar the accessibility and use of these sites.
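A minimal sketch of the web-filter idea, assuming a simple hostname blocklist; the domains and the helper function are illustrative and do not describe Barracuda's actual product logic:

    # Allow an outbound request only if its hostname is not blocklisted.
    from urllib.parse import urlparse

    BLOCKED_HOSTS = {"facebook.com", "myspace.com"}

    def is_allowed(url: str) -> bool:
        host = urlparse(url).hostname or ""
        # block the domain itself and any subdomain of it
        return not any(host == b or host.endswith("." + b) for b in BLOCKED_HOSTS)

    print(is_allowed("https://www.facebook.com/home"))  # False
    print(is_allowed("https://example.com/report"))     # True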

Many IT administrators declare that business managers who seek to have their laptops connected to company networks are at great risk of losing their data through Web 2.0 programs. This is why managers are encouraged to avoid the use of open software platforms; they are advised to use appropriate security software or web filters to deter their employees from accessing secured data through Web 2.0 connectivity as well as the threat zone of social software. The increasing trend of threat infections in IT firms has been seen as the main reason why most business leaders are against the use of Web 2.0. It is, however, inevitable that more security software firms will emerge to counter the threats being encountered in the information technology world. This is a negative trend for business managers, who must spend more on data security (Fraser and Dutta, 2009). It is rather inevitable that business leaders will have to spend more if they are to maintain data integrity and preserve the value and validity of data over a fairly long time. As much as managers oppose the entry of Web 2.0 into IT, they will very soon be obliged to use it, since it offers the best network connectivity and efficiency in the dissemination of information, with interactive sites favourable not only to the employees but also to the managers.

The Google tools found on Web 2.0 are among the best for gaining knowledge of world news as well as of other companies' management strategies that may help managers improve their productivity. The music and fun found on Web 2.0 are a motivating factor for most office workers, because they concentrate more and are rarely bored by office work. Most employees find it fulfilling, since they claim that they now do more work per unit time than they used to do without the social sites on the computer networks.

In conclusion, Web 2.0 is a good product that is worth embracing in different types of firms because the productivity of workers and managers is increased. However, it needs to be installed and used with care, so data security measures are needed. The web filter mechanism can be applied as a security measure, and the use of updated security software is the best approach to data security.

Tidal Power as a Source of Renewable Energy for the UK

The gravitational force of the moon causes the waters of the earth's oceans and seas to bulge along the axis pointed directly at the moon (Tidal Power 2010). These forces, coupled with the centripetal and centrifugal forces resulting from the earth's rotation, cause the rise and fall of oceanic tides. Tides are highest (spring tides) when the moon and the sun are in line, pulling the earth's oceanic waters in one direction, and lowest (neap tides) when the moon and the sun describe a perpendicular axis centred on the earth (Tidal Power 2010).

Figure 1 Tidal range as affected by the moon and the sun (Currie et al 2002)
One lunar cycle takes approximately 4 weeks and the earth rotates about its axis once every 24 hours (Tidal Power 2010). Because the moon advances along its orbit while the earth rotates, a given point on earth faces the moon roughly every 24 hours and 50 minutes; with two tidal bulges, this produces a tidal cycle approximately every 12.5 hours. At a time when the world is striving to undergo a green revolution, where renewable sources of energy that cause minimal damage to the environment are the future, the predictability of the tidal cycle makes this natural phenomenon a very promising source of renewable energy.

Potential Tidal Power Sites in the UK
The United Kingdom has many potential sites for the generation of tidal power. The River Severn between Wales and England is very suitable for a barrage, as are the Sound of Islay and the Pentland Firth in Scotland, and Pembrokeshire (Sustainable Development Commission 2007). The UK, according to DUKES, had a total electricity generation of 385 terawatt-hours (TWh), equivalent to 43.9 gigawatts of electrical output (GWe). The total tidal power generation capacity of all tidal barrages, tidal streams and estuaries having at least a bank on English shores is 5.57 GWe, meaning that the UK can source up to 13 percent of its total electricity requirements from harnessing tidal energy (Smith 2010).
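A quick worked check of that 13 percent figure, using only the numbers quoted above:

    # Tidal capacity as a share of total electrical output (figures from the text).
    total_gwe = 43.9    # total UK electrical output, GWe
    tidal_gwe = 5.57    # total tidal capacity, GWe
    print(f"{tidal_gwe / total_gwe:.1%}")   # 12.7%, i.e. roughly 13 percent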

The table below is a summary of the potential tidal power sites and their projected capacity.
Site            Projected Electrical Output (TWh)
Severn          25.00
Solway           9.66
Morecambe Bay    5.98
Wash             3.70
Humber           1.65
Thames           1.37
Dee              0.89
Mersey           0.57
Total           48.82
Table 1 Tidal site capacities in the UK (Smith 2010)

Design and Technology Considerations
To harness tidal power, current technologies allow either the construction of a tidal barrage or the utilisation of tidal streams. A barrage is an installation at a bay or an estuary that lets water flow through it as the tide rises (Tidal Power 2010). When the tide stops rising, gates are closed, effectively damming the water in the basin behind the barrage and creating a hydrostatic head. This water can then be released through the gates to drive turbines, generating electricity.

The diagram below is an illustration of a simplified tidal barrage.

Figure 2 Simplified tidal barrage (Currie et al 2002)
There are various turbine designs available for use in barrage power generation. In a bulb turbine, the generator is housed in a bulb around which the water flows. The disadvantage of this design is that maintenance requires the water flow to be stopped, causing time delays and loss of generation. In a rim turbine, the generator is mounted perpendicular to the waterway for easier access and maintenance (Tidal Power 2010). The disadvantage of this arrangement is the difficulty of regulating power generation. Tubular turbines are most recommended for the UK's greatest potential tidal site, the Severn Estuary (Sustainable Development Commission 2007). In a tubular turbine, the blades are coupled through an elongated shaft and oriented at an angle in such a way that the generator sits at the top of the barrage.

Figure 3 (a) a bulb turbine, (b) a rim turbine, and (c) a tubular turbine (Currie et al 2002)

The implementation of tidal generation plants in the UK has been slow because of the high initial costs involved and the lack of technologies that do less harm to the flourishing marine ecosystems around estuaries and lagoons (Smith 2010). Feasibility studies and further research on employing tidal streams in the deeper seas should be carried out to tap this source of green energy.

Power Available From a Barrage
Figure 4 Diagrammatic representation of a barrage (Currie et al 2002)
If ρ is the density of seawater (kg/m³), g the gravitational constant, Cd the barrage's discharge coefficient and A the approximate area of the basin (m²), then at any instant the power derivable from the turbine may be written as

P = ρ g Cd A (Z1 − Z2) √(2g(Z1 − Z2))

where Z1 and Z2 are the levels of the water in the sea and the basin respectively.
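A hedged worked example of the formula as reconstructed above; every numeric value below is an illustrative assumption, not site data:

    # Instantaneous power through a barrage sluice for an assumed head.
    import math

    rho = 1025.0          # seawater density, kg/m^3
    g = 9.81              # gravitational acceleration, m/s^2
    cd = 0.8              # discharge coefficient (assumed)
    area = 100.0          # flow area, m^2 (assumed)
    z1, z2 = 10.0, 5.0    # sea and basin water levels, m (assumed)

    head = z1 - z2
    flow = cd * area * math.sqrt(2 * g * head)   # volumetric flow, m^3/s
    power = rho * g * head * flow                # instantaneous power, W
    print(f"{power / 1e6:.1f} MW")               # ~39.8 MW for these numbers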

Economic Factors
Barrage construction requires large investment capital. Private investors are reluctant to take up such projects since the payback period is long. The UK government should therefore either invest directly or attract able long-term investors to tap this energy source (Sustainable Development Commission 2010). After initial installation, maintenance is minimal and a turbine may function for over 30 years.

The cost-effectiveness of tidal power stations is determined by the size of the barrage and the height difference between low and high tide (Tidal Power 2010). The feasibility of a tidal generator improves as the ratio of the barrage length to its annual generation in kilowatt-hours falls. This factor is behind the initiation of the Swansea, Fifoots Point and North Wales tidal generation stations in Wales, where tidal ranges are high.

Environmental and Social Impacts of Tidal Power Generation
Tidal energy is renewable, providing electricity without emitting greenhouse gases or any toxic by-products (Sustainable Development Commission 2007). If it were efficiently harnessed, tidal energy would reduce reliance on nuclear generators, which cause thermal pollution and radioactive emissions. However, there is a risk of disrupting the marine and shoreline ecosystems.

Damming bays or estuaries could also affect the geography of the shoreline, affecting recreational activities, fishing and shipping (Sustainable Development Commission 2007). However, the construction of tidal power generating stations should follow the example of the La Rance barrage, which has been operational in France since 1966 while causing negligible disruption to the ecosystem and recreational activities.

Potential for Reducing Carbon Emissions
The Severn Barrage, if completed, is estimated to save 18 million tons of coal every year. If other feasible projects are built, then the levels of carbon emissions could be reduced significantly (Sustainable Development Commission 2007). At a time when climate change and global warming are posing a serious threat to world ecosystems and the survival of humanity, tidal energy should be appreciated as a way of powering homes and industries while decreasing carbon emissions to the atmosphere.

Conclusion
An increasing demand for energy has brought about environmental degradation due to the use of fossil fuels that release carbon into the atmosphere. Mining in itself pollutes the environment, and non-renewable sources of energy are on their way to exhaustion. This is a call for the UK and the world to invest in renewable energy. The UK is endowed with enough natural resources to develop tidal energy into a major contributor to its electricity requirements (Smith 2010). Capital should therefore be raised to realise this potential, as it offers the opportunity of a more sustainable future for an energy sector that conserves the environment.

IS at Newark Opthalmic Centre

Task-1
IS Strategy for the Centre
The centre is on a growth trajectory and it appears that Dr. John Harrison has been able to convince his patrons of the sincerity of the healthcare services being provided by his centre. Healthcare services in general are a very crucial and integral part of our lives. With competition prevailing in almost all fields, healthcare is no exception and must brace itself for competing in such a scenario. Hamel and Prahalad (1996) stated, "The competition between firms is as much a race for competency mastery as it is for market position and market power." Therefore, providing effective and efficient healthcare services to the people becomes an issue when the healthcare centre has to contend with competition. The Information Systems strategy in general will have to be devised according to the level of prevailing competition and other related factors. Porter's Five Forces Model helps in understanding the influencing factors and planning the strategies accordingly.

Porter's Five Forces for the healthcare services sector

It is therefore quite clear that, apart from the threat of competitive rivalry, the bargaining power of customers can make the maximum difference as far as growth and survival in the healthcare sector are concerned. Therefore, for long-term survival the healthcare centre is required to invest in information systems in order to strengthen the Customer Relationship Management (CRM) aspect and improve customer engagement and communication.

Putting into practice an effective IS strategy will not only help the patients, but will also help the centre in procuring supplies, arranging reports, stocking medicines, disposing of files, etc. with more efficiency and accuracy. This will in turn help in increasing the level of customer satisfaction. According to Kotler (1974), "an individual's beliefs or conceptions about what is desirable, good or bad" form the value system. In the case of a healthcare centre, such belief is of significant importance. When talking about competition and growth potential, innovation, excellence, customer satisfaction and value go hand in hand. The case study suggests that:

The centre has seen substantial growth over the years, with an increase in workload as well as in staff strength.
So far, the centre has only a couple of stand-alone systems, and those only take care of the account books and some other clerical work.
Records of more than 8,000 patients are being maintained in files in physical form, with over 3,500 annual visits.
An Excel sheet is the only IT tool being used for scheduling appointments and reminding patients about their schedules.

The centre has experienced serious accounts receivable problems, resulting in the non-disbursal of salaries at times. Unless corrected, this cash flow issue is bound to grow out of proportion with the increase in the overall business of the centre.

Maintaining the records of patients is proving to be an arduous task.
The centre is not able to follow up the cases handled by it. Following up an existing patient/customer proves to be an important element of converting the person into a loyal one and establishing brand equity. Citing the dominance of Japanese companies in world trade, Hamel and Prahalad (1994) underline the crucial role brand identity plays in winning the approval of customers and taking on the competition.

As of now, the centre doesn't seem to be prepared to invest huge amounts in IT; therefore, Collington can start with those areas where the advantages would be most visible from the beginning. While emphasizing the need for IT, Carr (2003) stated that the stress should be on the "I" rather than the "T" in IT. Therefore, in order to have a successful implementation of IT, some of the critical factors which could prove to be key for successful implementation of the new system include:
Hassle-free registration of patients
Easier tracking of patient records
Error-free billing and payment system
Therefore, under the circumstances, it is suggested that:
There is an urgent need to train the workforce at the centre. It needs to be emphasized here that the workforce might have some reservations about implementing the IS. Such reservations need to be addressed at the outset by assuring everyone that implementing the IS will in no way adversely impact their job prospects. Hammer and Champy (1993) have carried out extensive research on the process of reengineering and contend that in order to get worthy results the corporation will have to seek constructive cooperation from the workforce and work out a plan such that the reengineering process doesn't just remain a process change, but is able to work out a change in rules as well.

It is certainly not a good strategy to procure equipment before deciding on its actual usage. In this case, Collington has procured the equipment without deciding on its exact usage. This in turn is bound to invite angry outbursts from Dr. John Harrison.

Collington must take inputs from each staff member, ask them about their difficulties and try to suggest appropriate solutions with IT tools. This will also help them participate actively in the entire process of IS implementation. In addition, if Dr. Harrison comes to know about the prevailing difficulties at the centre, he is bound to view the IS implementation more sympathetically. It is pertinent to note here that there's no off-the-shelf, ready-made solution that fits every organization; as pointed out by Easterby et al (2003), organizational learning is an evolutionary process and varies depending upon situational factors.

In the beginning, Collington must identify key areas where the IT implementation will have maximum impact. Underscoring the need for the strategic use of information by an organization, Ward and Peppard (2002) suggest that products and services enhanced with new features, using effective integration and executive management, lead to strategic advantages for the organization. Some such areas in this case are:

Maintaining the records of patients, so that appropriate follow-up can be done
Scheduling the appointments of the patients
Maintaining the account books
Some of the fields where IT tools can be implemented at a later stage include maintaining the records of employees, procurement, inventories, etc.

The staff must undergo training in two phases: familiarization training must take place prior to the actual implementation of the process, while a second phase, on the lines of on-the-job training, must also be planned well in advance. Burgio and Burgio (1990) cite a number of research studies in arriving at the conclusion that well-planned training and development activities leave a positive impact on the morale of healthcare professionals such as nursing assistants, which subsequently gets reflected in the services offered by these professionals to their patients.

There must be a feedback mechanism in place to assess the implementation process and its acceptance by the workers. Regular feedback will not only provide an opportunity for the staff to express their dissatisfaction or grievances (if any), but it will also help the management in devising and implementing newer schemes for taking better care of the patients.

Task-2
IT Strategy
Procuring the requisite IT tools in order to bring about comprehensive changes in the functioning of the centre forms the basis of the IS strategy. In order to work out the best possible IT solution for the company, and an optimum payoff on the IT investments, the IS strategy is required to be aligned with the overall business strategy of the company. The OECD (2008) carried out a study to assess the extent of convergence prevailing in different segments of our society. The study indicated that IT implementation and high-speed networks are assisting in resolving societal concerns in fields like healthcare and education. Therefore, the Newark Opthalmic Centre would be able to bring about better efficiency and tighter controls if the workforce is able to handle the basic minimum tools and equipment required for the purpose. Keen (1981) contends that information systems help in coupling devices which coordinate planning and improve management control. It is quite evident that while the business manager, James Collington, is in favor of taking help from the available Information System tools, Dr John Harrison is skeptical about the success of such a venture. Therefore, it needs to be emphasized that at least a bare minimum set of critically required IT equipment and tools be procured in the first batch. Once its effectiveness is established, the healthcare centre can go ahead with more. The centre will have to compare the external business environment with its internal business environment in order to prepare the current application portfolio. Some of the important aspects of this strategy are:
Procurement of computers with a configuration that not only handles the future workload of the centre, but leaves enough storage to digitize the existing records of the centre.
Together with the computers, the centre also needs printers at the reception area and in the accounts department.

Standalone computers at best serve the purpose of dumb storage devices. Therefore, in order to fully utilize the potential of the new computers, networking among the computers is also of crucial importance. This requires laying out proper cabling and routers.

While the workforce has been assigned different types of routine tasks, James Collington will have to make sure the historical records are uploaded onto the computers in the shortest possible time. This might involve assigning special duties to the most computer-literate person at the centre or hiring the services of expert persons for short durations. In today's context, Currie (2000) underlines that outsourcing is one important form of IT sourcing; therefore, the centre can outsource the upload of historical records to an outside agency.

There must be storage space to take a backup of the uploaded data. This is of crucial importance, particularly when a new system is being tested and implemented. Preferably, there should be a portable standalone drive or computer which can be used exclusively for backups.
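A small illustrative sketch of such a backup step in Python; the paths are invented placeholders, not details from the case study:

    # Copy the uploaded records to a dated folder on a separate drive.
    import shutil
    from datetime import date

    SOURCE = "C:/centre/records"                          # live data (assumed path)
    BACKUP = f"E:/backup/records-{date.today():%Y%m%d}"   # portable drive (assumed)

    shutil.copytree(SOURCE, BACKUP)   # raises an error if the target already exists
    print(f"backed up {SOURCE} -> {BACKUP}")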

Dependable software is the backbone and nerve centre of any IT system. Therefore, the centre also needs to procure licensed versions of compatible software.

Task-3
Marketing the IS Strategy to the Staff
Keen (1981) comments that resistance to change and efforts to protect vested interests arise mainly out of a sense of suppression and insecurity. Therefore, efforts which help in alleviating such fears of the staff will help James Collington in effectively marketing the Information System strategy to the staff. This is important particularly because the implementation of the IS will result in a number of related changes in the existing rules and regulations. Thus Yoo et al (2006) agree that the principles of management must also be flexible and adaptable to every change and need. So, in order to market the change to the staff, Collington will have to prepare a strategy such that, on the one hand, it doesn't create a sense of suppression and, on the other hand, it doesn't prove to be a huge burden on the existing resources. A marketing plan is a way of achieving something within the available resources and time limits; such a plan basically involves setting objectives and selecting strategies accordingly. It is worth emphasizing here that if Collington is able to effectively market the IS strategy to the staff, then he will be able to persuade Dr. Harrison more easily about further improvements to the system. Some of the steps required to market the IS strategy are:

Preparing some leaflets highlighting the advantages of such systems in other healthcare centres.
Preparing a comparison chart of the possible levels of efficiency that could be achieved by each one of the staff members if the IS strategy is fully implemented.

Assuring the workforce about their job security. At times this helps, because staff tend to think that an IS strategy might result in redundancies.

The process of implementation must also be planned in a phased manner, to avoid sudden changes which might result in unforeseen outcomes. Smith (2008) stated that in today's context the management needs to make use of democratic processes to win the trust of staff, customers, suppliers and a host of other stakeholders.

The staff must be motivated to accept the changes through the announcement of encouragement schemes. Quite often the lack of any positive motivation from the managers results in an automatic provision of negative motivation; how somebody directs his/her efforts towards a goal is determined by the motivating/de-motivating factors. The encouragement can be in the form of financial benefits or gifts. Such a scheme need only be in place for a short period of time, until everyone is familiarized with the new system.

In addition, schemes need to be planned for patients as well. If the staff find that there's indeed an overall increase in the satisfaction level and turnover of the patients, then it will also help in managing their disinterest. While devising schemes for the customers, it must be kept in mind that the aim is a long-term relationship between the consumer and the care-giver. In fact, in these competitive times companies quite often offer good bargains to consumers: the consumer gets preferential treatment from the care-giver and receives high-quality healthcare at affordable prices.

Task-4
Marketing IS to Dr Harrison
Marketing communication has acquired centre stage over the years. How well a company is able to communicate the features of its products or services holds the key to the overall marketing success of the company. Marketing efforts as such can take different forms depending upon the product, the services, the market segment, and the customer profile. In this case, the marketing manager is James Collington, and Dr. Harrison is the key person who needs to be convinced about the usefulness of the new system being set up in the centre. Therefore, effectively marketing the IS to Dr. Harrison holds the key to its survival in the long run. Some of the steps that would help in marketing the project to Dr Harrison are:

If James Collington is able to effectively market the strategy to the staff, and prove to Dr Harrison that he has the leadership qualities to take everyone among the staff on board, this will convince Dr Harrison of the genuineness of the efforts. Reithel and Finch (2007) suggested that the personality characteristics of an individual play an important role in developing leadership skills.

Dr Harrison is quite sensitive about the costs and benefits of the new system being installed. Therefore, James Collington will have to prepare a cost-benefit chart indicating the monetary as well as non-monetary advantages that the centre will enjoy after the implementation of the new steps. For example, it is worth emphasizing that goodwill earned as a result of efficient services becomes a great asset in turning customers into loyal and regular customers, and loyal customers are in turn considered brand ambassadors for the product or service. It needs to be emphasized to Dr Harrison that intangible benefits like goodwill and brand equity can only be strengthened with a proactive attitude from the employees and management. Examples of well-established hospitals and other companies, indicating the amount of investment made by such companies in establishing brand equity, will prove to be an effective marketing tool for convincing Dr Harrison. Keller (2002) states that a brand finds it easy to have a positive impact when customers react more favourably to the product or service.

A list of advantages for existing as well as prospective customers once the new IT system is implemented can be prepared. This will help in shaping the viewpoint of Dr Harrison.
A formal SWOT analysis of the Newark Opthalmic Centre can also be carried out by James Collington, with particular emphasis on the weaker points and the types of opportunities that the centre can explore. It should be emphasized that timely action would not only help in converting opportunities into strengths but would also help in finding solutions to some of the weaknesses of the centre.

It needs to be emphasized that once the employees feel satisfied working for the hospital, fewer employees will leave. This will help the Newark Opthalmic Centre retain an experienced workforce while providing reliable services to its customers/patients.
While preparing the presentation showcasing the advantages of the new system, it is equally pertinent to note that James Collington must not depict too many holes in the existing set-up at the centre, because that might result in a negative depiction of the existing system. Instead, the comparative advantages of the new system must be highlighted.

Task-5
Implementation Plan That Can Be Presented to Dr Harrison
There are several dimensions which need to be taken care of while preparing the implementation plan. Keeping such dimensions in mind, the implementation plan can be prepared by bringing the following points to the knowledge of Dr Harrison:

The Newark Opthalmic Centre prepares a plan for routine check-ups for the patients. The MIS will come out with a list of information which can prove helpful in reminding the customer about his/her date with the doctor. Such information will also help the centre to put in place the requisite equipment for the tests to be done in the minimum possible time. This will certainly make the customer happy, while the centre will also be able to cater to more patients in the allotted time.

A comprehensive list of comparisons with some competing healthcare centres, indicating the differences before and after the implementation of IT strategies at such centres, will certainly help Dr. Harrison in making up his mind.

A presentation can be prepared with inputs from different sections like accounts, record keeping, reception, scheduling, etc. Emphasis must be placed in the presentation on the differences that the new system will make in terms of effort, time and cost savings.

Some capital has been invested in implementing the new information system. Therefore, James Collington must come up with data and statistical information about the possible annualized rates of return on this particular investment, by taking into account a number of prevailing factors and the prevalent market conditions. These will of course be futuristic projections, but such projections will also help the healthcare centre in keeping a check on the operations and the operating revenues.
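A hedged sketch of the kind of projection Collington might prepare; every figure below is an invented placeholder, not case-study data:

    # Simple payback and annualized-return projection for the IS investment.
    investment = 50_000.0        # initial IS outlay (assumed)
    annual_saving = 15_000.0     # projected yearly benefit (assumed)
    years = 5

    payback_years = investment / annual_saving
    annualized = (annual_saving * years / investment) ** (1 / years) - 1

    print(f"payback:    {payback_years:.1f} years")            # 3.3 years
    print(f"annualized: {annualized:.1%} over {years} years")  # ~8.4%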

Using computers and an IT system for the storage of information will also help in reducing dependence on paper, which in turn implies that the centre will be able to reduce its dependence on the huge piles of files being kept to maintain the records of the patients.

Sometimes law enforcement agencies or legal firms require data about certain patients and their history of ailments. In such cases, maintaining patient records in a proper manner, so that the records can be extracted at short notice, will lead to better efficiency and a good rapport with such agencies.
In these competitive times, retaining a customer proves to be of great advantage where there are other similar service providers within the region. With the help of a computerized database and scheduler, the centre will know beforehand about the schedules for the coming 2-3 days, so even if the patient forgets about his/her health check-up, the healthcare centre reminds him/her about it. This helps in shaping long-standing relationships with the customers.
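A minimal sketch of that 2-3 day reminder lookup, assuming appointments are held as simple (patient, datetime) records; the names and dates are invented:

    # List the appointments falling within the next few days so reminders can be sent.
    from datetime import datetime, timedelta

    appointments = [
        ("A. Patel", datetime(2010, 6, 14, 9, 30)),
        ("B. Okafor", datetime(2010, 6, 16, 11, 0)),
        ("C. Jones", datetime(2010, 6, 25, 14, 15)),
    ]

    def due_for_reminder(now, horizon_days=3):
        cutoff = now + timedelta(days=horizon_days)
        return [(who, when) for who, when in appointments if now <= when <= cutoff]

    for who, when in due_for_reminder(datetime(2010, 6, 14, 8, 0)):
        print(f"remind {who} about {when:%d %b %Y %H:%M}")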

If the Newark Opthalmic Centre is able to cater to a substantial market, then in times to come it can enter into alliances with other test centres or healthcare centres in adjoining cities or regions. Such strategic alliances will help the centre extend its footprint across diverse markets, which will assist the company during expansion and diversification. With such steps the Newark Opthalmic Centre would be able to enter the managed healthcare sector as well. At times, when a patient/consumer is travelling away from home, a tie-up between care-giver companies helps the patient/consumer get good care from an affiliated company as well. Such developments are bound to translate into good opportunities for the centre.

The final implementation plan must include the following details in a concise manner
Dates and period of the first phase of training
Batches and schedules of staff members for undergoing training (note: all members of the staff cannot be sent for training simultaneously)

Dates for installing the new systems (note: the replacement of the manual system with the IS cannot be implemented everywhere simultaneously)
The exact date and time for the changeover from manual to computer; for example, the reception area will be the most suitable starting point for this changeover

A time period must be decided in advance during which the new system and the old (manual) system will work in parallel. This will help in case of a technical glitch during the early days. This period can be from about a week to a month.

Order placement for carrying out the cabling work to set up the intra-network within the premises of the healthcare centre.

Daily/weekly/monthly scheduled meetings to take stock of the implementation process.

LAN/WAN security of databases in cloud computing

This paper will discuss the issues of security in local area networks (LANs) and wide area networks (WANs). The security of a database is very important, and achieving it is imperative for any company which stores its data in a cloud computing environment: databases say much about the company. They are among the most valuable assets of the company, and their security is very important to it. There is a particular benefit of cloud computing whereby training is done only once and subsequent training is done by the cloud computing providers; users are provided with increased capability so that it is no longer necessary to train them on new software applications. Most of the information stored in the cloud consists of large databases, which makes security a very important factor in the design of the LAN/WAN security infrastructure. The security of the database has to be seen in the context of confidentiality, integrity and availability, that is, maintaining the status of the data at all times, both when they are in the cloud and when they are within the premises of the company. The work of a database administrator in a cloud computing environment involves guarding the database from frauds such as privacy breaches and access by people who are not authorized to reach the data. The current trends in information technology put the security of data and information at stake. There are two issues that should be considered when dealing with data in a LAN/WAN cloud computing environment: the security of the data while they are in the cloud, and the security of the data while they are in transit to the cloud computing environment.
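As one hedged illustration of the integrity requirement (a standard technique offered as an assumption, not a method named above), a client can attach an HMAC to each record before it leaves the LAN and verify it when the record is read back from the cloud:

    # Tamper-evidence for records stored in the cloud, using the standard library.
    import hashlib
    import hmac

    SECRET_KEY = b"shared-secret-kept-on-premises"   # assumed to be managed locally

    def protect(payload: bytes) -> bytes:
        tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
        return tag + payload                  # store the 32-byte tag with the data

    def verify(blob: bytes) -> bytes:
        tag, payload = blob[:32], blob[32:]
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("record was altered in transit or at rest")
        return payload

    print(verify(protect(b"customer table dump")))   # b'customer table dump'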

Cloud computing is a buzzword in the information technology industry. There have been efforts by companies and individuals to store databases in the cloud, but the big question is whether there are enough mechanisms to assure their safety while they are stored there. The point of contention here is that there are few companies which provide cloud computing services, and they are the same companies who have to assure their clients of the security of what they store in their clouds. Many researchers and technologists argue that there should be third-party companies concerned with the security of information stored in the cloud; such third-party players would become a natural by-product of the new technology of cloud computing. Cloud computing providers are totally responsible for what takes place in their clouds, including the security of the data, and clients have had to communicate with these providers very often to make sure that all is well with their data in the cloud. This paper will look at the various instances where the security of databases can be achieved in a cloud computing environment.

Chapter one
1.1 Introduction
Knowledge of cloud computing is important for one to understand the strategy that needs to be taken when coming up with a security implementation in a LAN/WAN cloud environment. This technology involves delivering services hosted through the Internet. Cloud computing would be useless without technologies like Web 2.0, Software as a Service (SaaS), and Data as a Service (DaaS); these technologies have made cloud computing a success (Cleveland, 2009). The main aim of cloud computing is to shift the paradigm so that computing acquires a new meaning: information technology moves from desktop applications to services which are hosted in the cloud. As the world delves into cloud computing, there are security issues which have become a point of concern for many.

Cloud computing involves the use of the Internet to obtain services that were initially obtained from desktop computers; LAN/WAN security of cloud computing is paramount to the storage of databases in the cloud. This technology allows businesses and consumers to access applications without the need to install them on their own on-site servers; instead, the applications are installed on remote servers. In the traditional model of application use, consumers purchase licenses for application software from their software provider and install the software on their on-site servers. In the cloud computing case, services are provided on an on-demand basis, where consumers pay a subscription fee for the service. The use of this technology increases efficiency because storage, memory and processing are centralized.

There are three types of cloud computing. The first is Software as a Service (SaaS). This is the cloud computing type where applications are installed on remote servers and then offered to the consumer as a service. A single instance of the application software runs in the cloud and serves multiple users or organizations. In traditional software use, users would purchase licenses and install the software on their on-site servers; with cloud computing, the end-user pays for the service he uses.

The second type of cloud computing is Platform as a Service (PaaS). Most companies started by providing SaaS to end-users; this is where cloud computing, and the LAN/WAN cloud storage of data, all started. Most of these companies have since started developing platform services for end-users. In PaaS, products are provided that enable applications to be developed and deployed for end-users. Platforms act as an interface that enables users to access applications provided by partners or by customers. Some of the big players in this cloud computing industry, such as Microsoft, Google and Amazon, have provided platforms, including Windows Live for Microsoft, App Engine for Google and EC2 for Amazon, which enable users to gain access to applications stored on centralized servers.

Chapter two
2.1 Literature review
McEvoy and Schulze (2008) discuss the limitations that were experienced with grid computing. One of the problems of grid computing is that it exposes too much detail of the underlying implementation, making interoperability more complex and scaling almost impossible (McEvoy & Schulze, 2008). Although this exposure was originally regarded as a feature of grid computing rather than a flaw, it proved problematic. When someone is looking for a solution at a more abstract and higher level, that is where cloud computing comes in handy and plays a big role.

Jha, Merzky, and Fox also give a description of clouds as providing a higher level of abstraction through which services are delivered to the customer. It is widely agreed that the difference between the cloud and the grid lies in the complexity of the interface through which the services are delivered and the extent to which the underlying resources are exposed. With cloud computing, the higher-level cloud interfaces restrict the services to off-the-shelf software, deployed as a generic shared platform (Jha, Merzky, & Fox).

There are many papers and proceedings which discuss SaaS, cloud computing, virtualization, and grid computing. Several of the most useful references are summarized in this section, including those that support and those that conflict with the various definitions.

There have been various views about the cloud model. Some authors have argued that the cloud computing model incorporates popular trends such as Web 2.0, SaaS, and DaaS. The main aim of all these developments is to change the way we compute and to shift entirely from desktop-based computing to services and resources which are hosted in the cloud.

Other explanations of cloud computing draw a distinction between cloud services and cloud computing. On this view, a cloud service is any business or consumer service that is consumed and delivered over the Internet in real time, while cloud computing consists of the full information technology environment, including all the network components, that makes the delivery of cloud services a reality. Cloud computing depends on the network, the LAN/WAN, which makes it worth researching; clients access cloud services over the Internet, so all that users need is an Internet connection.

Another definition of cloud computing is that it is a style of computing in which large and scalable information technology activities are provided as a service to external customers using Internet technologies. Cloud computing is characterized by its self-service nature: customers acquire resources whenever they wish to use them, as long as they have an Internet connection, and release those resources when they are no longer needed.

A cloud computing system is the environment in which the consumption of cloud services is enabled and made possible. Cloud computing is a new way in which capacity is increased, capabilities are added and functionalities are exploited without adding infrastructure to the system, training new skills or acquiring new software licenses. In this new setup, the services can be categorized depending on the needs of the consumer. These categories include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), managed service providers (MSP), and utility computing, which deal in products, services, and solutions consumed over the Internet in real time. The users of cloud computing do not possess any infrastructure of the system because there is no initial investment in servers or software licenses. They instead use the resources as a service and pay for their use as supplied by the service provider. Most cloud computing providers offer options ranging from lower-powered system units to units backed by extensive multicore CPU resources. A simple sketch of the pay-per-use idea follows.
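The pay-per-use character of utility computing reduces to simple metering arithmetic. The following Python sketch illustrates the idea with invented rates and resource names; real providers meter many more dimensions.

    # Illustrative pay-per-use billing: consumers pay only for the
    # resources they consume. Rates are invented for the example.

    RATES = {"cpu_hours": 0.10, "storage_gb_month": 0.05, "data_out_gb": 0.12}

    def monthly_charge(usage):
        """usage maps a metered resource to the quantity consumed."""
        return sum(RATES[item] * quantity for item, quantity in usage.items())

    bill = monthly_charge({"cpu_hours": 300, "storage_gb_month": 50, "data_out_gb": 20})
    print(f"Charge for the month: ${bill:.2f}")  # 300*0.10 + 50*0.05 + 20*0.12 = $34.90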

Conceptual framework
The theory this paper advances is that computing, in the form of cloud computing, is effective when centralized. The entire buzz in the cloud area goes back to centralization: recent developments in computing show the very interesting fact that computing is shifting back to centralized services, just like the ones we had in the 20th century. We can thus say that the pendulum is swinging back to its original place, and it is worth noting that computing is more efficient when it is centralized. In my own view, the development of computing is based on virtualization, which has been the main pillar in the emergence of cloud computing; all the concepts of cloud computing originated from virtualization technologies. A brief overview of virtualization shows that cloud computing is in itself an outgrowth of virtualization.

Berry et al. (2005) indicate that the concept of virtual machines has been in existence since the 1960s, when IBM first developed concurrent, interactive access to a mainframe computer. Each virtual machine gave users a simulation of the real physical machine, providing the same services they would have had if they were accessing the machine directly. This offered an elegant way of sharing resources and time, and reduced the ever-soaring costs of hardware. Each virtual machine was fully protected, behaving as a separate copy of the underlying operating system, so users could run and execute applications concurrently without fearing a crash of the system. The technology was therefore used to reduce the cost of acquiring new hardware while improving productivity, because users could work at the same time on the same machine.

This technology has long been practiced in storage devices, which are divided into partitions. A partition is a logical division of a hard disk drive that simulates the effect of two separate hard disks.

Virtualization in operating systems is the use of software to enable a piece of hardware to run more than one operating system image simultaneously. This technology got its boost from mainframes ten years ago, where it allowed administrators to put an end to the waste of expensive processing power.

Virtualization software was adopted at a much faster rate than ever imagined, and even information technology experts embraced it. Virtualization has been applied in three areas of information technology: networking, storage and servers. Network virtualization is the method of combining the available resources in a network by splitting the available bandwidth into several channels, each independent of the others, which can be assigned to a particular server or device in real time. The main idea behind network virtualization is to divide the network into manageable parts.

Storage virtualization is the act of pooling physical storage from multiple network storage devices so as to simulate a single storage device on the network which can be managed centrally. This technology is what has become popularly known as storage area networks (SANs). A sketch of the pooling idea follows.
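In the following illustrative Python fragment, two hypothetical physical arrays are presented as one logical volume, and callers never learn which device holds their allocation. It is a sketch of the pooling principle, not of any SAN product.

    # Sketch of storage virtualization: several physical devices are pooled
    # and presented as one logical volume, as in a SAN. Names are hypothetical.

    class PhysicalDisk:
        def __init__(self, name, capacity_gb):
            self.name, self.capacity_gb, self.used_gb = name, capacity_gb, 0

    class VirtualVolume:
        """Presents pooled disks as a single centrally managed store."""
        def __init__(self, disks):
            self.disks = disks

        @property
        def free_gb(self):
            return sum(d.capacity_gb - d.used_gb for d in self.disks)

        def allocate(self, size_gb):
            # Place the allocation on whichever disk has room; the caller
            # never needs to know which physical device was chosen.
            for disk in self.disks:
                if disk.capacity_gb - disk.used_gb >= size_gb:
                    disk.used_gb += size_gb
                    return disk.name
            raise RuntimeError("pool exhausted")

    pool = VirtualVolume([PhysicalDisk("array-a", 500), PhysicalDisk("array-b", 1000)])
    print(pool.free_gb)        # 1500 GB, seen as one volume
    pool.allocate(600)         # transparently lands on array-b
    print(pool.free_gb)        # 900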

Server virtualization is the masking of server resources, such as the number of individual physical servers and processors, from server users. The main aim of server virtualization is to spare the user from having to understand and manage the complex details of server resources, while increasing resource sharing and utilization of capacity in a way that can be expanded later.

The technology of virtualization can be seen as part of an overall trend in information technology that includes autonomic computing, a scenario in which the information technology environment manages itself based on perceived activity, and utility computing, in which computer processing power is a metered utility that clients pay for only as needed. The main aim of virtualization is to centralize administrative tasks while improving scalability and workload management.

From this computing trend, it is clear that computing is headed toward developing more and more virtual hardware, so that the real hardware is not seen as such even though its work and presence are tremendous. This explains the virtual partitioning of drives in computer hardware and the presence of grid computing.


2.2 Research Questions
This paper is based on the following research questions.

What is the security level of a database stored in the cloud? How secure are the data and databases stored in the cloud against unauthorized access? There is much to be desired in the control of data in the cloud. What is the responsibility of corporate users in the cloud? With all the hype that has been associated with cloud computing, there are issues which should be considered before any company shifts its paradigm and moves completely to cloud computing (Dodani, 2009). There is the issue of loss of control over information: end users lose control of their information once it is sent to the cloud. Hackers are no longer idle teenagers; they have become expert IT professionals who have made hacking their full-time mission. Security cannot simply be left to remote managers and servers in the expectation that everything will be right, yet all of it is left to the third party to ensure. In theory, the data stored in the cloud is usually safe because it is replicated across multiple machines; but if the provider loses the data, there is no local backup. This can be overcome if users download all the documents they have stored in the cloud to the desktop, as sketched below. This may be tedious for most users, which is one more reason why many remain skeptical about moving to this new technology.
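As a concrete illustration of the local-backup mitigation just mentioned, the following Python sketch copies every cloud-stored document to a local directory. The cloud_client object, with its list_documents() and download() methods, is a hypothetical stand-in; no particular vendor's API is implied.

    # Hedged sketch of a local backup of cloud-stored documents.
    # cloud_client is assumed to expose list_documents() -> iterable of
    # names and download(name) -> bytes; both are hypothetical.
    import os

    def backup_cloud_documents(cloud_client, backup_dir):
        os.makedirs(backup_dir, exist_ok=True)
        for name in cloud_client.list_documents():
            content = cloud_client.download(name)          # bytes from the cloud
            with open(os.path.join(backup_dir, name), "wb") as local_copy:
                local_copy.write(content)                  # local fallback copy

    # backup_cloud_documents(client, "/home/user/cloud-backup")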

The second research question deals with the performance of the database in the cloud. Cloud computing vendors may not meet the standards required for quality performance. There are standards that have been set for the proper storage of data in any environment; with cloud computing vendors unregulated, many security leaks are left which could be detrimental to the security and integrity of the data in the cloud (Dodani, 2009).

The third research question is about management of the database in the cloud. It is proving difficult to administer security and manage a virtualized corporate information technology environment, and management of security in the cloud may be thwarted by the legal complications that come with it. There are no tools available for the user to monitor the security level and manage cloud computing vendors and their products.

The fourth question is about governance and regulatory compliance. Many questions come to mind when the outsourcing of services is raised in any organization. The processes of cloud computing, the internal tools to be used by buyers, and the third-party auditing process all need to be addressed.

The fifth question is about the finances of managing the security of the database in the cloud. Will the company be able to manage security in the cloud without adding any budgetary allocation for it? If tools are developed by the buyers, will those tools prove costly once they have been adopted by the company?

Chapter three
3.1 Methodology
Because few companies have implemented cloud computing, it was hard to find a company from which to gather features and views on cloud computing security, such as how data will be secured while being transferred to the cloud and how it will be secured while in the cloud. Most of the research material came from giant companies like Google, IBM and Microsoft, which have fully adopted cloud computing in their systems and have all the tools required to study this subject well. Most of the views were taken from renowned review sites, like CNET and ZDNet, which have wide experience in reporting company information and upcoming technology trends. These review sites give a balanced representation of the companies.

3.1.1 Procedures
The results came from five companies which have dealt widely with cloud computing and are concerned with the security of the databases stored in clouds. These companies have representatives and program officers who were very helpful in this research, because the companies are leaders in the development and deployment of cloud computing. The views of the employees of these companies were captured in a questionnaire which was sent online, and the data was analyzed using the SPSS program. Respondents were asked how they perceive their achievement of database security in cloud computing, and then asked for their personal sense of the extent to which the war against attacks in the cloud has been won.

In part 1, the respondents were asked to state the things they feel are important when implementing cloud computing, and to give the issues they felt were paramount in implementing security in cloud computing environments. Participants were then asked a series of questions related to database security in cloud computing: the type of cloud computing they wish to implement (attribute vs. consensus), the type of information source (personal vs. impersonal), the type of heuristics (independent self-related vs. interdependent self-related), decision speed, consideration set, product involvement, and product knowledge. In part 2, participants were asked a series of individual-differences questions about the technologies they wished to be implemented. In part 3, participants were presented with demographic questions such as age, gender, nationality, race/ethnicity, and cultural identity. Once a participant finished the questionnaire, he/she was thanked and dismissed.

Shadow Program as a Technology Sales Manager

A technology sales manager in the software industry is in charge of the sales department and is expected to meet a set target. The current vice president of sales will not be available for one week and needs someone to shadow his position. The author of this paper was chosen to shadow the vice president, and certain qualifications were considered before the appointment was made.

The reasons or qualifications for the shadow program include, firstly, the capacity to allocate and prioritize work. This is a basic requirement that ensures every sales representative is allocated enough sales work to promote the industry. A second attribute is good communication skills: as a technical sales manager, I am a good communicator in writing, in person and on the phone.

Communication is a very important element of a sales career because it involves convincing people to buy your product. Good communication skills will help me to interact with a wide range of customers.

A third reason is that I am an enthusiastic, self-motivated and ambitious salesperson with the capacity to motivate other sales representatives in the software industry; my aim is to reach the sales target that has been set. Fourthly, my knowledge of different software makes me stand out for the position of technical sales manager in place of the vice president. I have an interest in managing other people in the industry, and the vice president finds me the best choice for such a position.

Benefits of the shadow program
As a technical sales manager, I hope for various benefits during that period of one week. The first and paramount benefit of this position is to develop my skills as a sales manager. This will help me to venture into different fields of marketing, where I will develop new relationships with companies, organizations and business entities. In addition, the desire to gain more experience is another expectation from the software company.

Experience is the most important benefit of taking on the technical sales management position, because it will familiarize me with different consultative sales methodologies and other value-based selling technicalities. The importance of such a position is to develop personal skills that will help me get promoted in the industry. Within the period of one week, I would like to gain excellent computer and project management skills. Problem solving is another area in which I hope to benefit while standing in for the vice president. Through interaction with senior managers and customers, I will develop the skill of handling problems in the industry.

Once the period of one week has expired, I should have gone through different experiences and events. This will help me to acquire knowledge about the sales position and to offer the vice president information that will help him promote the business. One important suggestion for the vice president is that he should drive and facilitate sales in many parts of the country. The manager should also expand the industry by opening branches in different countries, which will help to increase sales and hence profits. As a shadow employee, I would encourage the vice president to maintain the strong client relationships already in existence; this will give the business a competitive advantage.

SOCIAL AND ETHICAL ISSUES OF DATA MINING

OVERVIEW: WHAT IS DATA MINING?
Sometimes referred to as knowledge discovery, data mining is the process of analyzing data from different angles and transforming it into useful information. Technically speaking, data mining is the process of finding correlations and patterns among numerous fields in huge relational databases. It is commonly used in practices like fraud detection, marketing, surveillance and scientific discovery. Even though data mining is a somewhat new term, the technology is not. A toy example of the kind of correlation it looks for is sketched below.
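The following Python example (using the statistics module available from Python 3.10) shows, on invented figures, the kind of relationship between two fields that a data mining tool would surface automatically from a much larger database.

    # A toy illustration of the definition above: finding a correlation
    # between two fields that might sit in a large relational database.
    # The figures are invented for the example. Requires Python 3.10+.
    from statistics import correlation

    weekly_ad_spend = [120, 150, 90, 200, 170, 60]
    weekly_sales    = [1400, 1650, 1100, 2100, 1900, 800]

    # A Pearson correlation close to +1 suggests the fields move together,
    # the kind of pattern a data mining tool would surface automatically.
    print(round(correlation(weekly_ad_spend, weekly_sales), 3))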

Organizations have long used powerful machines to sift through large volumes of supermarket scanner data and to analyze market research reports. However, continuous innovations in computer processing power, disk storage capacity, and statistical software are dramatically increasing the accuracy of analysis while driving down the cost.

DATA, INFORMATION AND KNOWLEDGE
Data
Data is a collection of facts and figures that can be processed by a computer (William Stallings, 2007). Data exists in huge amounts in different formats, which include:
Meta data: data about the data itself, such as logical database design and data dictionary definitions.
Transactional data: sometimes called operational data; includes data such as sales, inventory, cost, accounting and payroll.
Nonoperational data: includes data like forecast data, industry sales and macroeconomic data, among others.

Information
This is the knowledge conveyed concerning some specific fact, subject or event; that of which one is apprised or told; intelligence, news (Jill Dyche, 2000).

Knowledge
Information can be converted into knowledge about historical patterns and future trends (Bill Palace, 1996). For example, information on shop sales can be analyzed to provide knowledge of consumer behavior, so that a shopkeeper or manufacturer can determine which items are most susceptible to promotional efforts.

THE EFFECT OF DATA MINING AND ITS CAPABILITIES
Data mining is nowadays used mainly by companies with a strong consumer focus: retail, financial, communication, and marketing organizations. It helps these companies to determine relationships among internal factors such as price, product positioning, or staff skills, and external factors such as economic indicators, competition, and customer demographics. Bill Palace (1996) goes on to say that it also helps them to determine the impact on sales, customer satisfaction, and corporate profits, and enables them to drill down from summary information into detailed transactional data.

With data mining, a retailer could use sales records of customer purchases to send targeted promotions based on an individual's purchase history; a sketch of this idea follows. By mining demographic data from comment or warranty cards, the retailer could also develop products and promotions that appeal to specific customer segments.
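A minimal Python sketch of that retail scenario: purchase records are grouped per customer, and customers with repeat purchases of an item are selected for a targeted promotion. The records and threshold are invented for illustration.

    # Sketch of the retail use described above: mine purchase records to
    # pick customers for a targeted promotion. Data is invented.
    from collections import defaultdict

    purchases = [
        ("cust-1", "coffee"), ("cust-1", "coffee"), ("cust-1", "filters"),
        ("cust-2", "tea"), ("cust-3", "coffee"), ("cust-3", "coffee"),
    ]

    # Build each customer's purchase history from the raw records.
    history = defaultdict(lambda: defaultdict(int))
    for customer, item in purchases:
        history[customer][item] += 1

    # Target a coffee promotion at customers with repeat coffee purchases.
    targets = [c for c, items in history.items() if items.get("coffee", 0) >= 2]
    print(targets)  # ['cust-1', 'cust-3']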

SOCIAL AND ETHICAL CONCERNS
When used in a business context and applied to some type of personal data, data mining helps companies to build detailed customer profiles and gain marketing intelligence (Van Wel & Royakkers, 2004). However, data mining is a big threat to important ethical values like privacy and individuality, because it makes it hard for individuals to autonomously control the unveiling and dissemination of data about their private lives. Van Wel and Royakkers (2004) go on to say that, to study these threats, we should distinguish between content and structure mining on the one hand and usage mining on the other. Web content and structure mining is a major cause for concern when data published on the web in a certain setting is mined and combined with other data for use in a totally different context. Web usage mining introduces privacy concerns when web users are tracked and their activities analyzed without their knowledge. Furthermore, both types of web mining are often used to create customer files with a propensity for judging and treating people on the basis of group characteristics instead of their own personal characteristics and worth (also known as de-individualization). Even though there are varying solutions to these privacy problems, none of them offers adequate protection; only a package consisting of solutions at both the individual and the collective level can help reduce some of the conflict between the pros and cons of web mining. Privacy and individuality ought to be respected and protected to ensure that people are judged and treated fairly.

In other scenarios, such as artificial neural networks and nearest neighbor classifiers, which don't make their knowledge explicit in rules, the use of controversial classification attributes may be hard to identify. Even with methods that make their classification transparent, such as decision trees, there is little to prevent an organization from using rules based on controversial attributes if that improves the accuracy of the classification. Persons who suffer denial of credit or employment based on race, sex, ethnic background or other controversial attributes, in a way contrary to law, are in a strong position to demonstrate harm only if they can show that the classifiers are using such attributes. The question is how they obtain access to the classifier results.

If, in one way or another, a person loses money or reputation due to this, courts may award damages. Moreover, since the potential for inaccuracy in the exercise is huge, it is predictable that the courts might apply a higher than usual standard of care in considering whether a company has breached its duty to a plaintiff sufficiently to amount to negligence. Only time will tell.

THE FUTURE OF DATA MINING
Invasion of privacy is one problem that needs to be addressed as we continue to embrace data mining. A possible solution is the anonymising of personal data (Rick Sarre, 2007). This would at least provide privacy to data subjects, though it would also render data mining somewhat less powerful because of its dependence on identifiable data subjects. A compromise would be to empower individuals to dictate the type and amount of data they think it is appropriate for an organization to mine. A sketch of the anonymising idea follows.
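One hedged sketch of anonymisation is to replace direct identifiers with keyed-hash pseudonyms before mining, so analysts can still link records belonging to the same anonymous subject without seeing who that subject is. The secret key and record below are stand-ins, not a prescription.

    # Sketch of the anonymising idea suggested above: replace direct
    # identifiers with irreversible pseudonyms before records are mined.
    # A keyed hash (HMAC) is one common approach; the key is a stand-in.
    import hashlib, hmac

    SECRET_KEY = b"replace-with-a-securely-stored-key"

    def pseudonymise(identifier: str) -> str:
        # Same identifier always maps to the same pseudonym, so records
        # can still be linked for mining without exposing the person.
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

    record = {"name": "Jane Doe", "ssn": "123-45-6789", "basket": ["tea", "milk"]}
    anonymised = {"subject": pseudonymise(record["ssn"]), "basket": record["basket"]}
    print(anonymised)  # direct identifiers dropped; subject hashes consistently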

CONCLUSION
We should have the ability to identify ethical dilemmas as they come up and to look for solutions in a timely and concurrent manner. Most ethical and social issues affect each other and overlap; we must therefore identify the commonalities and differences that are present and exploit them to derive solutions that uphold our ethical standards. We are in an environment of fast-changing technology with increasing social relevance, so we have to use the tools technology provides wisely, with our culture and future in mind. As technology advances, we have to check and make sure that it does not interfere with our social and ethical values. Human integrity should always be upheld, no matter how important the technological inventions are.

Morgan Stanley Risk Assessment Report

Morgan Stanley (MS) recognizes that information is the most critical resource it has, and providing its clients and shareholders with the most up-to-date information is a vital part of its strategic goal of maintaining its leadership as the best client brand with a global reach, a strong capital base and financial holding company (FHC) status. This risk assessment report assesses MS control measures against risks that can be exploited by internal and external threats in its interconnected web portals, Ideas and ClientServ. The U.S. federal government's Risk Management Guide for Information Technology Systems was used to conduct this risk assessment, which revealed several weaknesses in MS IT and technical security that should be addressed by management.

Purpose
The purpose of the risk assessment was to assess the capability of the MS control system to identify threats to its two web portals, Ideas and ClientServ. The risk assessment was a qualitative process meant to identify factors mitigating threats to the company's operation (BNAC, 2007). The web portals are classified as high-risk systems, vulnerable to threats and attacks, and are under constant scrutiny from the compliance and legal departments (Birchall, Ezingeard, McFadzean, Howlin, & Yoxall, 2004).

Scope of the risk assessment
The web portals Ideas and ClientServ comprise several key features: a user-friendly graphical user interface (GUI), an audit trail, administrative privileges and user groups. The ClientServ web portal is the external interface used by clients to review account activity and other banking transactions, while Ideas is the internal web interface used by financial advisors to review clients' account data and banking transactions. The web portals are supported by BMC BladeLogic software that utilizes an open architecture allowing in-house development of form-based interfaces using Java, which are then interfaced with the MS IT enterprise infrastructure consisting of its network, computer plant and databases (Kralj-Taylor, 2009). The web portals Ideas and ClientServ are hosted by the IT department at its data centers in New York and Utah.

The risk assessment report will also cover other supporting components of the MS IT infrastructure, including the MS network infrastructure, firewall, web applications, databases and operating systems. If exposed, this IT infrastructure could permit unlawful disclosure and/or modification of data or, in some cases, restrict legitimate users' access to enterprise data in the event of a denial-of-service attack on the network (Nocco, 2006).

Risk Assessment Approach
The assessment guidelines set up by the U.S. federal government for information technology systems were used in the risk assessment, which concentrated mainly on security weaknesses that could affect MS's reputation and standing in the industry in the event of a successful attack on its system resulting in loss of data integrity and exposure of confidential data (National Institute of Standards and Technology, 2002). The assessment produced important management, operational and technical control recommendations and mitigation factors (Ferris, 2002).

Risk Assessment process
This segment of the risk assessment details the procedure that was used to produce the risk assessment report. The procedure is divided into two stages: the assessment and post-assessment stages.

Assessment
This stage of the assessment reviewed publicly available information about MS. The information collected from these documentary sources helped to identify threats to their IT infrastructure (BNAC, 2007).

Table 1: Techniques used

Document review: The assessment reviewed MS security policies and information policy, and the enterprise and network infrastructure.
Vulnerability sources: The assessment reviewed documentary information from several MS partners, IT vendors and security experts to identify potential weaknesses in the IT system. The sources consulted included:
British North American Committee (http://www.bnac.org)
The NCC Group (http://www.nccgroup.com)
IBM (http://www.ibm.com/grid) (IBM, 2003)

2.1.1.1. Risk Model
The following is the model used to determine risks to the web portals Ideas and ClientServ:

Risk determination: Risk = Threat likelihood x Magnitude of impact

Threat likelihood: Several factors were used when determining the threat likelihood and its potential impact on MS enterprise infrastructure and reputation (President's Identity Theft Task Force, 2007). Some of these factors include:
Effectiveness of existing control measures
Nature of the system weakness
Source of the threat, and its capability and motivation
The following definitions were used in the assessment of threat likelihood (National Institute of Standards and Technology, 2002).

Table 2: Threat likelihood

High (1.0): The threat source has sufficient motivation and capability to penetrate and overcome the controls and measures put in place to prevent attacks on the system.
Moderate (0.5): The threat source has the motivation and capability, but the controls and measures put in the system are an impediment to a successful attack.
Low (0.1): The threat source lacks the motivation and capability to launch an attack on the system, or system controls are a deterrent against an attempted attack on system weaknesses.

Table 3: Magnitude of impact

High (100): The loss of confidentiality, integrity or availability could be expected to severely affect the operations and assets of the organization and/or individuals (National Institute of Standards and Technology, 2002). Examples:
An attack that compromises the operations of the organization to the extent that it cannot execute one or more of its primary operations.
Extensive damage to the organization's enterprise infrastructure.
Massive financial loss.
Massive loss of life or extensive personal injuries to individuals.

Moderate (50): The loss of confidentiality, integrity or availability could be expected to seriously affect the operations and assets of the organization and/or individuals. Examples:
An attack that compromises the operations of the organization to the extent that the execution of one or more of its primary operations is significantly degraded.
Significant damage to the organization's enterprise infrastructure.
Significant financial loss.
Significant personal injuries to individuals.

Low (10): The loss of confidentiality, integrity or availability is expected to have a limited adverse effect on the operations and assets of the organization and/or individuals. Examples:
An attack that compromises the operations of the organization to the extent that the execution of one or more of its primary operations is noticeably degraded.
Limited damage to the organization's enterprise infrastructure.
Limited financial loss.
Minor personal injuries to individuals.

Magnitude of impact: The risk assessment also measured the impact of a successful attack on the system. The report identified the following three security goals as facing the greatest risk in the event of a successful attack (Ferris, 2002):

Confidentiality: Loss of confidentiality as a result of unlawful disclosure of private and sensitive information (e.g. under the Data Protection Act or Privacy Act).
Integrity: Loss of data integrity as a result of unlawful and unauthorized access to the system.
Availability: The impact of denial-of-service attacks on the system's functions and operations.
Table 4: Risk calculation formula

Threat likelihood High (1.0): Low impact gives Low risk (10 x 1.0 = 10); Moderate impact gives Moderate risk (50 x 1.0 = 50); High impact gives High risk (100 x 1.0 = 100).
Threat likelihood Moderate (0.5): Low impact gives Low risk (10 x 0.5 = 5); Moderate impact gives Moderate risk (50 x 0.5 = 25); High impact gives Moderate risk (100 x 0.5 = 50).
Threat likelihood Low (0.1): Low impact gives Low risk (10 x 0.1 = 1); Moderate impact gives Low risk (50 x 0.1 = 5); High impact gives Low risk (100 x 0.1 = 10).
Scale: Low (1 to 10); Moderate (above 10 to 50); High (above 50 to 100).
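The model in Tables 2 to 4 is mechanical enough to express directly in code. The following Python sketch multiplies the likelihood weight by the impact score and maps the product onto the scale above; it mirrors the tables rather than any tool actually used at MS.

    # Risk = Threat likelihood x Magnitude of impact, mapped to the
    # Low/Moderate/High scale defined in Table 4.
    LIKELIHOOD = {"High": 1.0, "Moderate": 0.5, "Low": 0.1}
    IMPACT     = {"High": 100, "Moderate": 50, "Low": 10}

    def risk_rating(likelihood: str, impact: str) -> str:
        score = LIKELIHOOD[likelihood] * IMPACT[impact]
        if score > 50:
            return f"High risk ({score:g})"
        if score > 10:
            return f"Moderate risk ({score:g})"
        return f"Low risk ({score:g})"

    print(risk_rating("High", "High"))        # High risk (100)
    print(risk_rating("Moderate", "High"))    # Moderate risk (50)
    print(risk_rating("Low", "Moderate"))     # Low risk (5)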

Risk determination: The risk assessment adopted the following threat model to determine the level of risk to the IT and security system (National Institute of Standards and Technology, 2002):

Threat likelihood: the likelihood of a given threat source attempting an attack through weaknesses in the system.
Magnitude of the impact: the impact of a successful attack on the IT and security system through weaknesses in the system.
The effectiveness of existing and mitigating measures to neutralize and/or eliminate risks.

Table 5: Risk level definitions

High: There is a strong and evident case for urgent mitigating and corrective measures to ensure the existing system continues to operate; a system shutdown or a stop to all system integration efforts or testing may be required (BMC, 2009).
Moderate: A case exists for planned and focused corrective and mitigating measures, to be undertaken within a defined time frame, to ensure the system continues to operate (National Institute of Standards and Technology, 2002).
Low: The risk to the IT and security system must be weighed against the overall risk plan of the organization, and decisions made as to whether mitigating measures need to be undertaken or whether the organization can operate with the residual risk to the entire system.

If the risk level is considered very low or negligible, it should still be recorded, to ensure that all potential risks are identified for future reassessment and analysis to determine their threat level and likelihood.

System Characterization
Technology components

Table 6: Technology components (MS technology infrastructure)

Applications: In-house development using Java Enterprise technologies where most appropriate (Kralj-Taylor, 2009).
Databases: Sybase, DB2 UDB, DB2 mainframe.
Distributed computing: Message-oriented, loose coupling, XML messages, binary XML encoding, SOAP, FIX, XML content-based pub-sub XML routers.
Languages: C++, Java, C#, Perl, Python, A+, other dynamic languages.
Operating systems: Linux, Solaris, Windows, mainframe.
Networks: Cisco routers, firewall.
Interconnections: Interface to IBM grid, BMC BladeLogic (IBM, 2004).
Protocols: TCP, HTTP, IBM-MQ, persistent TCP, optimized data transport.

Physical location(s)

Table 7: Physical locations (MS main physical infrastructure)

Data centers: Brooklyn, NY; Utah (BMC, 2009).
Help desk: New York Plaza, NY.
Headquarters: Times Sq., NY (Kralj-Taylor, 2009).
Data Input into System

Table 8: Data into system

Personal Identification Number (PIN): The main personal data that goes into the system includes:
Name
Address
Phone number
SSN
DOB

Financial information: The main financial data that goes into the system includes:
Credit card number
Credit card verification code
Expiry date
Card type
Authorization reference code (BMC, 2009)
Transaction reference code

Login information: The main way to access the system is through login information; the data that goes into the system includes:
Username
Password (BMC, 2009)

System Users

Table 9: System users

MS clients: Access the system via the ClientServ web portal using a web browser. Clients can view account summaries, evaluate gains and losses in their accounts, trade in securities, evaluate account activity, transact business and download tax information (BMC, 2009).
MS financial advisors: Access the system via the Ideas web portal; manage clients' banking portfolios, such as reviewing clients' account data and statements and rebalancing clients' portfolios (Daula, 2006).
MS IT personnel: The application deployment team is tasked with planning, scheduling and coordinating changes to proprietary software and verifying the impact of those changes; the network and data management team is tasked with management of the enterprise infrastructure (IBM, 2003).
MS operations: Utilize the information in the ClientServ and Ideas web portal databases for change management and business continuity planning (BMC, 2009).
MS offices: Make use of the web portals for in-person reinstatements of clients' accounts.

Data flow diagram
The data flow diagram represents partial technology components of the web portal system.

Vulnerability Statement
The following vulnerabilities were revealed by the risk assessment:
Table 10: Identification of risks

Cross-site scripting: The web application can be used as a mechanism to launch attacks on end users' web browsers. An end user's session token can be used to spoof content to fool the user, compromising data integrity. This could lead to identity theft, which could cause massive credit fraud (BMC, 2009).
Wet-pipe sprinklers in MS data centers: A fire in a data center can trigger the sprinkler system to release water, which can compromise the availability of data in the MS enterprise. This could shut down the operations of the organization, causing massive financial losses and lawsuits.
Unused user identifiers: Unlawful and unauthorized use of user IDs by malicious users can compromise the integrity and confidentiality of MS data. Identity theft can lead to credit card fraud and insider trading, which can cause financial losses to Morgan Stanley (BNAC, 2007).
Uncorrected flaws: Malicious exploitation of security flaws in the system can compromise the integrity and confidentiality of MS data. This can lead to undetected fraud and transactions, causing losses, and to exposure of personal data to unauthorized individuals, which can cause lawsuits (BMC, 2009).
SQL injection: Malicious use of the web application to attack backend components by exploiting security flaws where web requests are not validated before being accessed by the web application; this could compromise the confidentiality and integrity of MS data (BMC, 2009).
Password weaknesses: Passwords could be easily guessed and used to gain access to the system; unchanged passwords could compromise data integrity and confidentiality through identity theft (BNAC, 2007).
Scripts and initializing files: Malicious exploitation of passwords and user names in scripts could result in loss of confidentiality and integrity of MS data.
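The SQL injection entry above stems from web requests that are not validated before reaching the database. The standard remedy, which the recommended controls later point toward, is to pass user input as bound parameters rather than splicing it into SQL text. A minimal Python sketch follows; the table and column names are invented for the example and do not describe any MS schema.

    # Sketch of the standard SQL injection remedy: validate input and pass
    # it as a bound parameter, never by string concatenation.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (client_id TEXT, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES ('C100', 2500.0)")

    def balance_for(client_id: str) -> list:
        # Unsafe alternative: f"... WHERE client_id = '{client_id}'" would
        # let an input like "' OR '1'='1" return every row in the table.
        query = "SELECT balance FROM accounts WHERE client_id = ?"
        return conn.execute(query, (client_id,)).fetchall()   # bound parameter

    print(balance_for("C100"))            # [(2500.0,)]
    print(balance_for("' OR '1'='1"))     # [] -- the injection payload is inert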
Threat Statement

Table 11: Threat statement

Threat source: Hackers
Motivation: Hackers are likely to be motivated by the thrill of challenge, rebellion and ego.
Threat actions: Social engineering; unlawful and unauthorized access; IP address hijacking; website defacement; system intrusion; blackmail.

Threat source: Cyber terrorists
Motivation: Cyber terrorists are likely to be motivated by money, destruction of information, exploitation (BMC, 2009) and revenge.
Threat actions: Cyber terrorism; spoofing; system break-ins; IP spoofing.

Threat source: Employees
Motivation: Employees are likely to be motivated by curiosity, money, intelligence or corporate espionage, and sabotage.
Threat actions: Worms, Trojans and viruses (Jaques, 2005); unlawful and unauthorized system access; system bugs; malicious browsing of confidential information (Redeyof, 2009); fraud and embezzlement.

Threat source: Environment
Motivation: No applicable motivational factor in this case.
Threat actions: Natural disasters such as earthquakes, flooding and tornadoes.

Threat source: System failure
Motivation: No applicable motivational factor in this case.
Threat actions: Air conditioning failure; communication failure; fire; human error; power loss.

Risk Assessment Results

Table 12: Risk assessment results

Item 1: Cross-site scripting.
Threat source: Hackers. Vulnerability: Cross-site scripting. Existing controls: Validation of headers and cookies. Likelihood: Medium. Impact: Medium. Risk rating: Medium.
Recommended controls: Require validation of all parameters (cookies, hidden fields and query strings) against system specifications (BMC, 2009).

Item 2: Fire could activate the wet-pipe sprinklers, compromising system data in the MS data center.
Threat source: Environment / system failure. Vulnerability: Wet-pipe sprinklers. Existing controls: No relevant controls to mitigate this risk. Likelihood: Moderate. Impact: High. Risk rating: Moderate.
Recommended controls: None; residual risk accepted.

Item 3: Unlawful and unauthorized use of unused user identifiers.
Threat source: Hackers, employees. Vulnerability: Unused user identifiers. Existing controls: Controls are in place but not enforced. Likelihood: Moderate. Impact: High. Risk rating: Moderate.
Recommended controls: Require a verification process for terminated accounts within an agreed timeline; continuous employee education (Redeyof, 2009).

Item 4: Malicious exploitation of security flaws can be executed in unpatched applications.
Threat source: Hackers, employees. Vulnerability: Uncorrected flaws. Existing controls: Careful monitoring of advisories and patch releases is in place; the enterprise IT infrastructure is protected by a firewall; employees pose the greatest risk. Likelihood: Moderate. Impact: High. Risk rating: Moderate.
Recommended controls: Implement procedures for timely review and application of vendor patches; require automated system notification of system updates; continuous employee education (Reich & Benbasat, 2000).

Item 5: Extraction and modification of database information through insertion of SQL commands in form fields.
Threat source: Cyber terrorists and hackers. Vulnerability: SQL injection. Existing controls: Limited form-field input validation. Likelihood: High. Impact: Medium. Risk rating: Medium.
Recommended controls: Require all parameters to be validated and data integrity enforced; centralization of system libraries and components should be implemented and enforced to enhance the effectiveness of the validation process (Dhillon & Backhouse, 2001).

Item 6: User passwords remain unchanged or are easily guessed and cracked.
Threat source: Hackers. Vulnerability: Password weaknesses and effectiveness. Existing controls: Passwords are changed regularly and must be alphanumeric and at least 6 characters. Likelihood: Moderate. Impact: Moderate. Risk rating: Moderate.
Recommended controls: Enforce mandatory use of special characters.

Item 7: Malicious exploitation of initialization files and scripts could easily take place, compromising the integrity and confidentiality of data.
Threat source: Hackers. Vulnerability: Scripts and initializing files. Existing controls: No clear-text passwords are allowed in initialization files and scripts; access to back-office operations and workstations is restricted. Likelihood: Moderate. Impact: High. Risk rating: Moderate.
Recommended controls: Enforcement of corporate policy and practices and stringent security measures for corporate databases (Hirsh & Ezingeard, 2008).

Though it may not be practical to address all identified risks to the IT infrastructure, priority should be given to threats that have the potential to cause significant impact on an organization's mission, environment and objectives. The approach one organization may wish to use to mitigate these risks will vary from another's, but what is common is that a range of appropriate mitigation technologies is available from various security vendors, which can be used in addition to administrative measures, both technical and non-technical (Hirsh & Ezingeard, 2008).
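Item 6's recommended control, mandatory special characters on top of the existing alphanumeric six-character minimum, is simple to enforce programmatically. The following Python sketch shows one hypothetical way such a policy check might look; it is not drawn from any MS system.

    # Hypothetical password policy check for Item 6: at least 6 characters,
    # with alphabetic, numeric and special-character parts all present.
    import string

    def password_acceptable(password: str) -> bool:
        return (
            len(password) >= 6
            and any(c.isalpha() for c in password)              # alphabetic part
            and any(c.isdigit() for c in password)              # numeric part
            and any(c in string.punctuation for c in password)  # special character
        )

    print(password_acceptable("abc123"))    # False: no special character
    print(password_acceptable("abc12#"))    # True: meets the enforced policy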

Conclusion
All organizations that have automated their systems and operations in an effort to meet their corporate mission and vision in this digital era must put emphasis on the protection of their corporate information assets. This can only be achieved by putting in place a risk management plan and policy whose main objective is to protect the organization's ability to meet its objectives and goals (Reich & Benbasat, 2000). The risk management policy should not be seen as a purely IT or technical function but as a critical part of an effective and efficient security program for the organization, in which all players in the internal and external organizational functions need to involve themselves, from management to IT staff, operational staff and clients. Morgan Stanley has taken a huge step towards ensuring that a risk management plan plays a major part in its overall strategic goals.