Consequences of global warming for organisms

Global warming results from an enhanced greenhouse effect brought about by the emission of carbon into the atmosphere. When fossil fuels burn, carbon dioxide is released into the atmosphere. Carbon dioxide, together with other greenhouse gases, forms a layer that allows visible light from the sun to reach the Earth's surface but prevents degraded infrared radiation from escaping back into space (Bruce, 2002). The radiation is re-radiated back to the Earth's surface, and the resulting accumulation of heat raises the temperature of the atmosphere. Global warming has had a number of effects on the atmosphere, the world economy, the environment, and health as well. It has brought with it consequences such as floods, droughts, wildfires, tsunamis, and other natural events of great magnitude (Robert, Scott & John, 2009).
Human beings have been directly exposed to climatic changes through changes in weather patterns and water levels, which have affected air quality and food quality. There are also changes in ecosystems, agriculture, commerce, and settlements, which in turn have affected the economy (Stuart, 2005).

Some regions, such as the North and South Poles, have also been adversely affected, leading to a rise in sea level. Because of that rise, some islands have been submerged, causing people to migrate in search of alternative land.

Climatic changes have affected people's health, leading to malnutrition and to an increase in deaths and injuries brought about by extreme weather conditions.

Due to the increased carbon dioxide in the atmosphere, excess carbon dioxide dissolves in ocean water, raising hydrogen ion levels and thereby decreasing the ocean's pH (Global Warming Both Sides, 2003). Since aquatic life is sensitive to changes in pH and temperature, this may change the distribution of certain organisms and kill others. These organisms may die from oxygen depletion when ocean water becomes saturated with carbon dioxide.
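
The underlying chemistry can be summarized in a standard textbook form (this formulation is added for illustration and is not drawn from the cited source): dissolved carbon dioxide forms carbonic acid, which dissociates and releases hydrogen ions, and pH falls as the hydrogen ion concentration rises.

```latex
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^-},
\qquad \mathrm{pH} = -\log_{10}\left[\mathrm{H^+}\right]
```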

LAN/WAN security of databases in the cloud

Literature review
Much has been written about cloud computing. McEvoy and Schulze discuss the limitations experienced with grid computing. One of the problems of grid computing is that it exposes too much detail of the underlying implementation, making interoperability more complex and scaling almost impossible (McEvoy & Schulze, 2008). Rather than being treated as a flaw, this exposure became a defining feature of grid computing. When a solution is needed at a more abstract, higher level, that is where cloud computing becomes handy and plays a big role.

Jha, Merzky, and Fox also describe clouds as providing a higher level of abstraction through which services are delivered to the customer. It is widely agreed that the difference between the cloud and the grid lies in the complexity of the interface through which services are delivered to the customer and the extent to which the underlying resources are exposed. With cloud computing, the higher-level cloud interfaces restrict the services to off-the-shelf software, which is deployed as a generic shared platform (Jha, Merzky, & Fox).

Conceptual framework
The theory this paper advances is that computing, in the form of cloud computing, is most effective when centralized. The entire buzz in the cloud area goes back to centralization. Recent developments in computing show a very interesting fact: computing is shifting back to centralized services, just like the ones we had in the 20th century. We can thus say that the pendulum is swinging back to its original place. The theory behind all this is that computing is returning to the old days of centralized infrastructure, and it is therefore worth noting that computing is more efficient when centralized. In my own view, the development of computing is based on virtualization, which has been the main pillar in the emergence of cloud computing. All the concepts of cloud computing originated from virtualization technologies, and a brief overview of virtualization shows that cloud computing is itself an outgrowth of virtualization.

Berry et al. (2005) indicate that the concept of virtual machines has existed since the 1960s, when IBM first developed concurrent, interactive access to a mainframe computer. Each individual virtual machine gave users a simulation of the real physical machine, providing the same services they would have had if they were accessing the machine directly. This offered an elegant way of sharing resources and time, and it reduced spending on the ever-soaring cost of hardware. Each virtual machine was fully protected, a separate copy of the underlying operating system, so users could run and execute applications concurrently without fear of crashing the system. The technology was therefore used to reduce the cost of acquiring new hardware while improving productivity, because users could work at the same time on the same machine.
This technology has also been practiced in storage devices, which are divided into partitions. A partition is a logical division of a hard disk drive that simulates the effect of two separate hard disks.

Operating system virtualization is the use of software to enable a single piece of hardware to run more than one operating system image simultaneously. The technology got its boost from mainframes ten years ago, where it allowed administrators to put an end to the waste of expensive processing power.

Virtualization software was adopted far faster than ever imagined, and even Information Technology experts embraced it. Virtualization has been applied in three areas of Information Technology: networking, storage, and servers. Network virtualization combines the available resources in a network by splitting the available bandwidth into several channels, each independent of the others and assignable to a particular server or device in real time. The main idea behind network virtualization is to divide the network into manageable parts.
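
As a rough illustration of this channel-splitting idea, the sketch below (in Python, with an invented class and figures, not taken from any cited source) divides a fixed pool of bandwidth into independent channels and assigns them to servers in real time.

```python
# Hypothetical sketch of network virtualization: a fixed pool of bandwidth
# is split into independent channels that can be assigned and released
# on demand.

class VirtualNetwork:
    def __init__(self, total_bandwidth_mbps, channel_mbps):
        # Split the available bandwidth into equal, independent channels.
        self.free = [channel_mbps] * (total_bandwidth_mbps // channel_mbps)
        self.assigned = {}  # server name -> list of channel capacities

    def assign(self, server):
        # Hand one independent channel to a server, if any remain.
        if not self.free:
            raise RuntimeError("no free channels")
        self.assigned.setdefault(server, []).append(self.free.pop())

    def release(self, server):
        # Return one of the server's channels to the shared pool.
        self.free.append(self.assigned[server].pop())

net = VirtualNetwork(total_bandwidth_mbps=100, channel_mbps=10)
net.assign("web-server")
net.assign("db-server")
print(len(net.free))  # 8 channels left in the shared pool
```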

Storage virtualization pools physical storage from multiple network storage devices so that it appears as a single storage resource on the network that can be managed centrally. This technology is popularly known as the storage area network (SAN).

Server virtualization is the masking of server resources, including the number and identity of individual physical servers and processors, from server users. The main aim of server virtualization is to spare the user from having to understand and manage the complex details of server resources, while increasing resource sharing and utilization and preserving the ability to expand capacity at a later time.
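
A minimal sketch of the masking idea follows (hypothetical names and numbers, added for illustration): a hypervisor-like object pools physical capacity and hands out virtual machines, so the user never deals with the individual physical servers behind it.

```python
# Hypothetical sketch of server virtualization: physical capacity is pooled
# behind one interface, and users request virtual machines without knowing
# which physical server provides them.

class Hypervisor:
    def __init__(self, physical_cpus):
        self.free_cpus = physical_cpus  # pooled capacity of all physical servers

    def create_vm(self, name, vcpus):
        # The user sees only a VM of the requested size, not the hardware.
        if vcpus > self.free_cpus:
            raise RuntimeError("insufficient capacity")
        self.free_cpus -= vcpus
        return {"name": name, "vcpus": vcpus}

pool = Hypervisor(physical_cpus=32)   # e.g. several servers pooled together
vm = pool.create_vm("app-vm", vcpus=4)
print(vm, pool.free_cpus)             # remaining capacity can be expanded later
```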

The technology of virtualization can be seen as a subset of an overall trend in information technology that includes autonomic computing, a scenario in which the information technology environment manages itself based on perceived activity, and utility computing, in which computer processing power is a utility that clients pay for only as needed. The main aim of virtualization is to centralize administrative tasks while improving scalability and the handling of workloads.

From this computing trend, it is clear that computing is headed toward developing more and more virtual hardware, so that the real hardware is not seen as such even though its work and presence are substantial. This explains why we have virtual partition drives in computer hardware, and the presence of grid computing.

There are many papers and proceedings that discuss SaaS, cloud computing, virtualization, and grid computing. Several of the most useful references are summarized in this section, including references that both support and conflict with the various definitions.

There have been various views about the cloud model. Some authors have argued that the cloud computing model incorporates popular trends such as Web 2.0, SaaS, and DaaS. The main aim of all these revolutions is to change the way we compute, shifting entirely from desktop-based computing to services and resources hosted in the cloud.

Other explanations of cloud computing draw a distinction between cloud services and cloud computing. On this view, a cloud service is any business or consumer service that is consumed and delivered over the Internet in real time, while cloud computing consists of the full information technology environment, including all the network components, that makes the delivery of cloud services a reality. This is what enables cloud services to be performed.

Another definition of cloud computing is that it is a style of computing in which large, scalable information technology capabilities are provided as a service to external customers using Internet technologies. Cloud computing is characterized by its self-service nature: customers acquire resources whenever they wish, as long as they have an Internet connection, and relinquish those resources when they are no longer interested in the services.

A cloud computing system is the environment in which the consumption of cloud services is enabled and made possible. Cloud computing is a new way to increase capacity, add capabilities, and exploit functionality without adding any infrastructure to the system, training new personnel, or acquiring new software licenses. In this new setup, the services can be categorized according to the needs of the consumer. The categories include Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), managed service providers (MSP), and utility computing, which deals with products, services, and solutions consumed over the Internet in real time. The users of cloud computing do not possess any infrastructure of the system, because there is no initial investment in servers or software licenses. They instead use the resources as a service and pay for the use of these resources, which are supplied by the service provider. Most cloud computing providers offer options ranging from lower-powered system units to extensive multicore CPU systems that require more resources for their operations.

There have been discussions about the grid and its relationship with cloud computing. Kourpas argues that the grid has a set of resources that are physical in nature. The grid provides a way of accessing broad sets of resources, and it provides a way for IT resources to interact, thus offering a way to respond to changing business requirements (Kourpas, 2006). Kourpas identifies five areas where grid computing is used most often, listed below:

Business analytics
Design and engineering
Development and research
Government development
Enterprises

Kourpas also outlines the evolution of grid computing by showing how virtualization has developed to the advanced stage it has reached today. An important first generation of virtualization development is the logical joining of like resources (Kourpas, 2006). A second stage brings together resources on different platforms, such as application servers, storage, databases, and file systems, all managed as one through virtualization. The final stage brings grids together across organizational and company boundaries. Many technology professionals refer to this stage as cloud computing. Cloud computing is one of the latest technologies and the buzzword in the technology sector today. Many companies are bracing themselves to use this technology to leverage their operations and obtain cheaper storage solutions for their businesses. Companies have gone a notch higher by setting up their own clouds, hence the emergence of private clouds in companies. The following sections will look into the structure of the private cloud as compared with other cloud computing systems, such as the public cloud and the hybrid cloud. They will also look into how companies are using this technology to leverage their businesses for more profitability.
Foster, Kesselman, and Tuecke (2006) give the details of grid architecture from the perspective of the Globus Alliance. They break the architecture of the grid into three layers, each of which provides unique functionality. These services are offered from within the grid, as opposed to external grid services such as PaaS, which are offered to external customers. The grid architecture includes the hardware resources that make up the physical layer in the architecture of the grid.

Messinger and Piech (2009) discuss the architectures that are available for cloud computing. According to them, there are three types of cloud computing architecture. The first is the public cloud. This describes cloud computing in the traditional mainstream sense: resources are dynamically provisioned on a self-service, fine-grained basis over the Internet. They are delivered to consumers via web applications or web services from an off-site provider, and use of these services is charged on a fine-grained, utility computing basis. Utility computing is the category of computing in which consumers need not purchase licenses and install software on their own on-site servers; they simply access the services they require from off-site providers, as long as they have an Internet connection. These services are the computer applications that would normally be installed and used on servers within an organization. Consumers instead pay a subscription fee proportional to the service used.
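
The pay-per-use model can be illustrated with a small billing calculation; the rates and usage figures below are invented for illustration and are not taken from any provider.

```python
# Hypothetical utility-computing bill: the fee is proportional to
# fine-grained metered usage, with no up-front license or server purchase.

RATES = {
    "compute_hours": 0.10,     # $ per instance-hour (invented rate)
    "storage_gb_month": 0.05,  # $ per GB-month (invented rate)
    "bandwidth_gb": 0.08,      # $ per GB transferred (invented rate)
}

def monthly_bill(usage):
    # Sum metered usage times the per-unit rate for each service item.
    return sum(RATES[item] * amount for item, amount in usage.items())

print(monthly_bill({"compute_hours": 720, "storage_gb_month": 50, "bandwidth_gb": 100}))
# 720*0.10 + 50*0.05 + 100*0.08 = 82.5
```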

The second architecture of cloud computing discussed by Messinger and Piech (2009) is the hybrid cloud, a composition of multiple internal and/or external providers, which is typical for most enterprises. A cloud can describe a local device, say a Plug computer, combined with cloud services. It can also describe a configuration combining virtual and physical assets; for example, most virtualized environments require physical servers, routers, or other hardware, such as a network appliance acting as a firewall.

The last architecture of cloud computing is the private cloud, also known as the internal cloud. These offerings represent cloud computing on private networks. This type of cloud computing has been widely claimed to provide benefits with respect to data security, corporate governance, and reliability concerns. The disadvantage is that consumers still have to buy, build, and manage the infrastructure, which defeats the reason they shifted to cloud computing in the first place. A private cloud also does not benefit from lower up-front capital costs and less hands-on management, which essentially leaves it without the economic model that makes cloud computing such an intriguing concept. Research has shown that cloud computing will be headed this way in a few years to come.

Berry, Djaoui et al. (2005) discuss the security issues associated with cloud computing, though they do not discuss the data that reside in the cloud. Cloud computing has attributes that must be assessed so that all matters of security and privacy are well addressed. The areas of data integrity, data privacy, data recovery, and the evaluation of legal issues need to be critically analyzed for risk to be minimized. Providers such as Google with its App Engine and Amazon with EC2 offer computing that can be defined as scalable, IT-enabled capabilities delivered as a service to external clients using Internet technologies.

It is therefore imperative that customers demand a proper explanation of security policies and know the measures that these providers have put in place to assure their clients that they will not be exposed to security vulnerabilities in the course of their use of these services. Providers should also be able to identify vulnerabilities that were not anticipated at first.

The first issue to be considered when deploying cloud computing is the privileges given to users to access their data. Data stored outside the premises of an enterprise brings with it the issue of security. How safe is the data? Who else accesses the data? Data that have been outsourced bypass the controls of the enterprise's personnel. The client should get as much information as possible about how the data is stored and how the integrity of this data is catered for. The providers should be asked for specific information about their hiring of the privileged administrators who will manage the data.

The second issue to be considered is regulatory compliance. Consumers are responsible for the security and integrity of their own data even when that data is held and stored by other providers. Traditional service providers are subjected to external audits, in which auditors check the enterprise's security policy. Cloud computing providers should likewise accept external audits, and this should be agreed upon in written form.

The other security policy to be considered concerns the location of the cloud. In most cases, consumers do not know where the cloud is located, or even in which country; what they care about is that their data is being stored somewhere. The providers should indicate, in written form, their jurisdiction and should agree to obey local security policies on behalf of the consumers.
Another issue is that consumers should be aware of the security breaches providers have suffered. Providers have always claimed that security is at its tightest in the cloud, but this claim alone is not enough to settle security issues. It is worth remembering that every security system that has been breached was once considered infallible, and with newer technologies systems can still be broken into. An example is Google, whose Gmail service was attacked in 2007, forcing the company to apologize. With this in mind, it is a good lesson that even though systems in the cloud might be tight, there is no full assurance that they will never be hacked. While providers of cloud computing face security threats, research has shown that cloud computing has become very attractive to cyber crooks. As data in the cloud becomes richer, security should become correspondingly tighter.

The Magic of Business Intelligence Software

Abstract
This research paper describes business intelligence software and its capabilities. It further discusses organizational expectations of such systems. Even though businesses would like to develop their own business intelligence software to handle all sorts of organizational tasks, the costs related to developing and installing such software, in addition to the training required by its ultimate users, remain a significant deciding factor. Businesses may choose to purchase business intelligence software already available on the market, for example, Microsoft's PerformancePoint Server 2007. Such systems may be customized to meet organizational needs.

The Magic of Business Intelligence Software

Business intelligence refers to activities undertaken by a business to gather essential information about its competitors or the market. Business intelligence systems, on the other hand, tend to be interdepartmental information gatherers (McGuigan, 2010). With an emphasis on speedy retrieval of information, these technologies rely on data fed into them by data gatherers, to be relayed to the departments that require it. The auto-dissemination function of business intelligence systems is generally used for this purpose (Luhn, 1958, p. 314). But these systems may also perform the data mining function on their own before crunching that data in a highly efficient manner. For example, business intelligence software may be designed to gather important information about competitors (McGuigan). Likewise, this technology makes it possible for businesses to predict future scenarios through predictive analytics, combining data mining and statistical analysis (Kelly, 2009).
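
A minimal sketch of the predictive-analytics idea described above: fit a statistical model to historical data and extrapolate the trend. The sales figures are invented, and `statistics.linear_regression` requires Python 3.10 or later.

```python
# Hypothetical predictive-analytics sketch: combine "mined" historical data
# with a simple statistical model, then extrapolate to future periods.
from statistics import linear_regression

months = [1, 2, 3, 4, 5, 6]
sales = [100, 108, 115, 123, 131, 140]   # invented historical figures

slope, intercept = linear_regression(months, sales)

# Predict the next quarter from the fitted linear trend.
for m in (7, 8, 9):
    print(m, round(slope * m + intercept, 1))
```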

Business activity monitoring, or complex event processing, along with text analytics and column-based databases, are additional functions of business intelligence systems (Kelly). As incredible as these systems appear, organizations are required to weigh the costs of business intelligence software against its benefits before installing it.
   
Undoubtedly, business intelligence software is useful not only in collecting and disseminating information between separate departments, running reports, and making predictions based on past performance, but also in conducting analyses of the company's external environment, including market analysis based on the latest economic trends (King, 2009). Moreover, this technology is meant to help organizations with customer relationship management. After all, employees using the system are able to speedily retrieve the information necessary to satisfy customers (Jasra, 2009).

Besides, information about the external environment of the company provides ideas for new business initiatives, thereby allowing for business expansion (McGuigan).
   
The following excerpt from a report entitled "Execs Take BI into Their Own Hands," published in eWeek, explains organizational expectations of business intelligence software:
       
         If you ask any given group of executives what they'd really like in a business intelligence
         product, the wish list would look a lot like the description of Cosmic AC, the universe-
         spanning computer in Isaac Asimov's classic short story, "The Last Question." Cosmic AC
         contained all knowledge and could answer any question it was asked. After the first couple
         of releases, the IT department wasn't involved.

         And that is what the executives in many companies want: to have their business
         questions answered immediately, and without needing to involve the IT department to
         formulate the questions and provide the reports. In short, they want to be able to draw data
         from a wide variety of sources and use that data to discover relationships that were
         previously unsuspected, but which can impact their businesses, and to do it immediately by
         simply asking the right question.

         "I think it's where the whole industry is going," explained James Kobielus, an analyst
         in Forrester Research's IT Client Group. "Users want to do self-service BI." ("Execs Take
         BI into Their Own Hands," 2010)

In other words, business executives are expecting magic from business intelligence software. They know such software can support informed decision making and increase organizational productivity as well as profits ("Sekunjalo acquires control of Blue Chip Business Intelligence Software company, Synergy Computing," 2005). However, instead of engaging IT departments to develop business intelligence software that meets organizational needs, businesses would like to do it all by themselves. A company by the name of iDashboards has developed a business intelligence dashboard for an insurance company in Michigan. Another company, Pegasystems Inc., has developed special software to help businesses comply with the Sarbanes-Oxley Act. Likewise, a U.S.-based organization helps Indian businesses analyze information obtained through business intelligence software (Solutions, 2006). But what if organizations were equipped to handle all of the above on their own? In that case, companies would save not only time but also the money expended on IT departments.
   
Of course, it is entirely possible for businesses to create their own IT departments. Yet it is more cost-effective to purchase business intelligence systems already available on the market, for example, Microsoft's PerformancePoint Server 2007, whose main functions are monitoring, analysis, and planning. The monitoring function of this software includes scorecards, performance indicators, and dashboards, which may provide essential information about market share, inventory turnover, training, customer satisfaction, and much more. This data may be analyzed, and budgets, strategic plans, and sales forecasts created from the available information through the planning function (Utley, 2007).

Furthermore, this business intelligence product may be built into Microsoft Office. In other words, users of the system would not have to learn new software after installing PerformancePoint Server 2007 ("What is PerformancePoint Server?", 2007). Thus, companies can save on training costs.
One problem remains with the installation of the PerformancePoint Server 2007, however.

Businesses that choose this technology must also install Microsoft SQL Server 2005, a platform for business intelligence, data integration, and reporting on which PerformancePoint Server 2007 must be built ("What is PerformancePoint Server?"). Although this makes PerformancePoint Server 2007 less convenient to deploy, SQL Server 2005 by itself does not have any unusual hardware requirements: an ordinary CD drive, a mouse, and a monitor are sufficient. However, users of the system require some additional software, including IIS 5.0 and ASP.NET 2.0, for the reporting function of the business intelligence system ("Hardware and Software Requirements for Installing SQL Server 2005," 2008).
   
For businesses that find it expensive to create their own IT departments and develop their own software, and then train their employees to use it, SAP BusinessObjects Edge is another business intelligence system to purchase. This system is certainly easier to install, as it does not require the organization to purchase another package of business intelligence software. In fact, SAP BusinessObjects Edge may be installed on Windows, Red Hat, and Novell SuSE Linux machines with little or no difficulty, as the installation process is guided by the software itself. Moreover, this product uses a Web interface and specializes in the following functions: reporting, queries, and analysis. Thus, SAP BusinessObjects Edge promises to make the decision-making process easier for businesses that employ it ("SAP BusinessObjects Edge: A Comprehensive Solution to Analyze Your Business End-to-End").
   
Both PerformancePoint Server 2007 and SAP BusinessObjects Edge may be customized if a business would like to gather and analyze information about its external environment in addition to its own key business activities. Another business intelligence system, the IBM Cognos 8 Special Edition, similarly promises the capability of being customized. Reporting, analysis, dashboards, and scorecards are also available on this system, and IBM Cognos 8 further offers a platform for performance management to boot. Moreover, like SAP BusinessObjects Edge, this business intelligence software does not require the organization to deploy new software or purchase new hardware: the existing infrastructure is adequate for this system. It may operate on Microsoft Windows, Red Hat, Intel x86, and UNIX, among other platforms ("Cognos 8 Business Intelligence Special Edition," 2008).
   
Clearly, the IBM Cognos 8 BI Special Edition is the best business intelligence system among the three described here. This system is not only hassle-free to install but also includes the greatest number of functions. Of course, the other two systems may also be customized to include the performance management function. Then again, there are many other functions for which a company may like to customize a system, including external environment analysis functions. In fact, the sky is the limit as far as organizational expectations of business intelligence software are concerned.
   
As new developments in information technology go on attracting new users to such software, and businesses are urged to realize the increasing benefits of these systems, the costs related to the installation and use of business intelligence software remain an essential deciding factor. Undoubtedly, organizations would like to view such systems as magical machines that analyze all kinds of data for informed decision making, and they would like to do it all on their own so as to save on costs. But it is vital to employ IT departments, either in-house or external, to install business intelligence software that meets organizational needs, or to analyze the information obtained through this software if a company does not have trained business staff to do so. Moreover, as our discussion reveals, it is most cost-effective to purchase business intelligence software already available on the market and customize it. Then again, if such systems do not meet a particular organization's requirements, it would be better to have specialists develop business intelligence software for the specific purposes of the company. Either way, it is essential to weigh the costs against the benefits of such software before deploying it.

ISDN/DSL

Today, all business and industrial setups depend on networking and communication. The most commonly used means of communication are the phone, which includes landlines, cell phones, and pagers, and the Internet. To make communication faster and more reliable, the communication sector is being transformed by new inventions and technology. As computer manufacturing and communication processes keep growing, the need for high-speed Internet access and communication is also rising. Games and digital media can both be delivered over the Internet, but performing these tasks easily requires a high-speed Internet connection; without one, the user is not only inconvenienced but also unable to perform tasks efficiently (Summers, 1999).

Different methods of communication are therefore available, and everyone has different opinions about what to use: some people use an ordinary modem, while some can manage wireless networking for simple communication. Some methods, however, are very common in households and businesses, including connecting to the Internet via LAN, WAN, modem, cable modem, ISDN, and DSL. All of these methods fulfill their respective purposes; ISDN and DSL, however, are two of the most commonly used means of communication and Internet access.

ISDN
ISDN stands for Integrated Services Digital Network. It is a CCITT/ITU standard for transmitting digital data over common copper telephone wires and other connectivity media. An ISDN adapter is used in place of a simple modem in homes and some businesses to give speeds of up to 128 Kbps, unlike an ordinary telephone modem, which is capable of only 56 Kbps (kilobits per second). This means of communication requires an adapter at both ends, so the ISDN provider must also have an adapter (Harte & Flood, 2005).

Homes and offices use analogue lines for landline numbers: the handset picks up the speaker's voice and transmits it over the phone line as an analogue signal. The modems used in homes convert digital signals from the computer into analogue signals so that they can travel over the usual copper phone line, and convert incoming signals back into digital form to provide connectivity. It is said that 56 Kbps is enough for home use, but as communication needs and equipment grow, it is becoming insufficient. In such cases ISDN is preferred, as it can reach speeds of up to 1.4 Mbps, although according to researchers 128 Kbps already exceeds typical needs in digital technology, so speeds beyond 128 Kbps are not strictly necessary. For transmission, ISDN uses Unshielded Twisted Pair (UTP) cables. Digital technology is the international standard for communication, and all methods of contact, whether sending data, video, or voice, are carried over these ISDN digital phone lines.

There are two common types of ISDN interface: BRI (Basic Rate Interface) and PRI (Primary Rate Interface). BRI comprises two 64 Kbps B-channels and a D-channel for smooth, managed transmission of information.

In the United States, the Primary Rate Interface has 23 B-channels and one D-channel; in European countries it has 30 B-channels and one D-channel. There is also another version of ISDN known as Broadband ISDN (B-ISDN), which uses broadband transmission; it is used where higher speeds than usual ISDN are required and is capable of transmitting data at up to 1.5 Mbps. B-ISDN lines need fiber optic cables for better performance, while the usual UTP lines are used for ordinary ISDN transmission.
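
A quick back-of-the-envelope check of these channel figures (a Python snippet added for illustration): BRI bonds two 64 Kbps B-channels into 128 Kbps of user data, while the PRI aggregates fill a T1 line (1.544 Mbps) in the United States and an E1 line (2.048 Mbps) in Europe once framing overhead is added.

```python
# ISDN channel arithmetic: aggregate capacity of BRI and PRI interfaces.
B_KBPS = 64            # each B-channel carries 64 Kbps
PRI_D_KBPS = 64        # the PRI D-channel is also 64 Kbps

bri_user_data = 2 * B_KBPS            # 2B  = 128 Kbps (BRI)
pri_us = 23 * B_KBPS + PRI_D_KBPS     # 23B + D = 1536 Kbps (carried on a T1)
pri_eu = 30 * B_KBPS + PRI_D_KBPS     # 30B + D = 1984 Kbps (carried on an E1)

# The gap to the 1544/2048 Kbps line rates is framing/synchronization overhead.
print(bri_user_data, pri_us, pri_eu)  # 128 1536 1984
```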

Normal ISDN phone line users require access to the B-channel ISDN network, while some users require special devices known as terminal adapters. These dedicated devices communicate with other ISDN devices or telephones (Harte & Flood, 2005).

ISDN operates on the first three layers of the Open Systems Interconnection (OSI) reference model, and each layer uses a different specification to send out data. On a regular telephone line, or analogue network, the companies provide a single communication channel, so only one facility, voice, video, or data, can be carried at a time. On ISDN lines, however, the same telephone lines are logically partitioned into numerous channels, generally divided into two types (Bhatnagar, 1997).

The first type is the B-channel, over which almost 64 Kbps of data can be transferred. ISDN lines generally have two B-channels, one committed to voice and the other to data communication, with the whole transmission occurring over one pair of copper wires (Harte & Flood, 2005).

The D-channel, or delta channel, is the second type; it is used for line setup and call signaling. It has 16 Kbps of bandwidth and is not often used for data, since a phone modem provides more speed.

When ISDN alone is considered, speed is its highest priority and its basic advantage. The usual dial-up modems used by computers have a limit of 56 Kbps, and after connecting, this speed drops for many reasons to only 45 Kbps or even less, whereas ISDN contains multiple digital channels that work simultaneously on the same line on which a modem cannot provide full speed. Moreover, if the telephone company provides a digital connection, the speed improves further, because digital signals rather than normal analogue signals are transmitted over the telephone line. The digital signals naturally boost the speed of ISDN lines, and better results are obtained (Harte & Flood, 2005).

The other advantage of ISDN is the provision of multiple services on a single line. Without ISDN, a consumer wanting to fax, telephone, video-conference, and use a credit card machine at the same time would need a separate line for each function; with ISDN, all of this is done on a single copper line, as the technology supports simultaneous operation. Another advantage is that ISDN takes almost 2 seconds to connect, while a simple modem takes 30 seconds or more (Bhatnagar, 1997).

Research and development is being done on B-ISDN technology, as it is said to offer higher speed. However, there is still a lot of room for the improvement of ISDN itself: changes in voltage levels, ring control, and many other properties need to be improved, and this standardization effort is known as National ISDN. Improvement is also expected in the application sector, as some applications will benefit from ISDN. The broadband category of ISDN is being researched alongside ATM; B-ISDN is closely related to ATM because ATM provides a reliable data encapsulation system that is used all over the network, beginning with TE1 or TA equipment and covering every portion of the telecommunications equipment in use. So great is its importance that quite a lot of people consider ATM to be B-ISDN. People are replacing their phone communication systems with ISDN networks, as companies now provide the whole thing in a pre-built box that is easy to install, provides an Ethernet connection using IP, and has an NT1 providing two phone jacks (Brewster, 1992).

The regulatory issues involved in this means of communication are policy issues concerning the identity of the provider and its ownership, the location of the provider and the receiver, and the factual basis of the company's financial backing and technology.

Rules and regulations on billing procedures should also be applied. Other technical issues include the interconnection of multiple ISDN carriers, the utilization of special packages, and the difference between voice and non-voice performance. Another is technological advancement at the interface if the bit rate is converted to 64 Kbps rather than 32 Kbps. These issues are soon going to be dealt with, and solutions will be found (Brewster, 1992).

The ISDN therefore promises high-performance, multiservice digital networks with the capability for worldwide connections. Even though some of the basic elements of ISDN are already in use, several places still use analog switches and in-band signaling. Microwave services are still used for broadcast over major sections of the long-haul network, and these use analog frequency modulation for voice and data. We can expect to see many changes in telecommunications engineering and further technical advances in ISDN (Harte & Flood, 2005).

DSL
Digital Subscriber Line (DSL) is a fast Internet service, like cable Internet. It provides high-speed networking over normal phone lines by means of a broadband modem. A distinctive quality of DSL is that it provides Internet service and telephone service at the same time on the same phone line, allowing the user to keep both a voice call and an Internet connection going without disconnecting either. DSL is said to reach speeds of up to 8.448 Mbps, although the normal rate is approximately 1.544 Mbps or lower; it is therefore now preferred over other connections. Today various homes and businesses use this network technology; however, it has a limited working distance and is not supported in places where there are no telephone lines (Smith, 2007).

Two kinds of high-speed Internet connection are used in households and businesses: cable modems and DSL. The TV cable connection wired into most homes carries the cable modem, while DSL, as mentioned, transfers data through existing phone lines without interrupting the telephone line used for talking. Compared to an ordinary copper phone line, both are quite fast, but they differ from each other in speed, reliability, cost, service, and availability (Bourne & Burstein, 2001).

To understand how DSL works and the technology behind it, consider the standard telephone. When a phone call is made, the voice signals are carried along the copper wires to the receiver's handset; these conversational frequencies range from 0 to almost 3,400 hertz. By restricting the frequencies that travel over the copper lines, the telephone company packs many wires into little space without creating interference between them. Formerly, only analogue signals were sent through this procedure for communication; today, with the advancement of technology, the phone line also carries digital data signals without taking up much of the telephone wires' capacity, using the frequencies above the voice band. This is how DSL works (Smith, 2007).
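
A minimal sketch of this frequency-division idea follows (Python, with approximate ADSL-style band edges that vary by standard and are given here purely for illustration): voice stays below about 3,400 Hz, so data can ride in higher bands on the same copper pair.

```python
# Hypothetical sketch of DSL frequency division on one copper pair.
# Band edges are approximate ADSL-style values, for illustration only.

BANDS = [
    ("voice",      0,       3_400),      # ordinary telephone call
    ("upstream",   25_000,  138_000),    # data sent by the user (approx.)
    ("downstream", 138_000, 1_104_000),  # data received by the user (approx.)
]

def band_for(frequency_hz):
    # Report which service occupies a given frequency on the line.
    for name, low, high in BANDS:
        if low <= frequency_hz <= high:
            return name
    return "unused/guard band"

print(band_for(1_000))    # voice      -> the phone keeps working
print(band_for(100_000))  # upstream   -> data leaving the home
print(band_for(500_000))  # downstream -> data arriving at the home
```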

There are many types of DSL, including the following:

ADSL: Asymmetric Digital Subscriber Line (ADSL) is the DSL type that became best known to household and small business consumers. It is asymmetric because most of its duplex bandwidth is dedicated to the downstream path, through which the user receives data. In most situations, existing phone lines work with ADSL, while in a few areas they may require upgrading (Summers, 1999).

CDSL: Consumer DSL (CDSL) is an adaptation of DSL trademarked by Rockwell Corp. It is lower in speed than ADSL but has the benefit that no splitter needs to be mounted at the user's premises.

G.Lite or DSL Lite: G.Lite is another name for low-speed ADSL; it does not require line splitting at the user's end, as the splitting is done at the telephone company. The data rate ranges from 1.544 Mbps to 6 Mbps downstream and from 128 Kbps to 384 Kbps upstream. It may become the most widely installed form of DSL (Wolf & Zee, 2000).

HDSL: High bit-rate Digital Subscriber Line (HDSL) is an early type of DSL, used for wideband digital transmission within a commercial location and between the telephone company and a customer.

IDSL: ISDN DSL (IDSL) is something of a misnomer, as it is closer to ISDN, providing rates of up to 128 Kbps.

RADSL: Rate-Adaptive DSL (RADSL) is an ADSL technology from Westell with software that can determine the rate at which signals can be transmitted on a given customer's phone line and adjust the delivery rate accordingly.

SDSL: Symmetric DSL (SDSL) is like HDSL but uses only one twisted-pair line.

UDSL: Unidirectional DSL (UDSL) is a proposal by a European company; it is a unidirectional adaptation of HDSL.

VDSL: Very high data rate DSL (VDSL) is an emerging technology that promises much higher data rates over comparatively short distances (between 51 and 55 Mbps on lines up to 1,000 feet or 300 meters in length). Many standards bodies are working on it (Held, 2000).

Usually, when a user connects to the Internet, the commonly used devices for connectivity are a regular PCI modem, a Local Area Network (LAN), a cable modem, or DSL. DSL, however, is becoming popular due to certain advantages. If the user receives a call on the telephone line, the Internet connection does not need to be shut down and can remain open. This ability to use the same line simultaneously for different purposes is the main and most notable feature of DSL (Bourne & Burstein, 2001).

Another feature is that DSL's speed is much faster than an ordinary modem's, so every connection is made at high speed. It also does not require any new wiring, unlike other methods that need a whole new setup of devices and wires; it uses the same copper phone wire used for making calls (Summers, 1999).

Also, the company provides the broadband modem, which saves the effort of finding a separate modem that meets the specifications demanded by the company, so no extra cost is incurred in purchasing a modem (Wolf & Zee, 2000).

DSL is at present being used in the majority of areas of the United States, in the United Kingdom, and in other countries too. DSL service availability depends on whether a local carrier has made the necessary investment in equipment and line service, and on how close the user is to the telephone company. DSL service providers in different parts of the United States include Covad, Qwest, Primary Network, BellSouth, SBC Communications, and Verizon (Burns, 2006).

Among the technical and regulatory issues that arise is the fact that a DSL connection performs better when the provider's office is close to the user's location; the greater the distance from the provider's office, the weaker the received signal.

Another issue being considered by researchers and developers is that a DSL connection is excellent for receiving data from a given source, but its performance when sending data is not very impressive.

Another issue with this technology is that it is not available everywhere. Not every place is supported by the necessary wiring: some places lack the copper-wire infrastructure for this kind of transmission, and since the technology works on that same copper wiring, places that lack it ultimately lack DSL service too. DSL is also an expensive means of communication and is therefore most suitable for business or commercial use; ordinary people who cannot afford an expensive means of communication tend to use other networking options (Bourne & Burstein, 2001).

The technology is expensive, but given its reliability and speed, households will adopt it for networking purposes, while the business sector is already using it. This shared use of the same line is among the best things technology has come up with in the IT and research sectors (Wolf & Zee, 2000).

DSL is now becoming quite popular in most regions, especially Western Europe and North America. The market therefore requires more value-added and sophisticated applications so that business competition can increase. Future strategies will demand fast Internet connections; DSL, in its many forms, will provide better utilization of copper-wire network communications and will give speeds of up to 24 Mbps using ADSL and VDSL. This will allow future DSL users to obtain VoD, VoIP, and IPTV. Such high-speed Internet would be very fast, with virtually no time taken to connect; however, all the research under way is yet to be tested (Burns, 2006).

Conclusion
From the details of both, we can see that DSL is better if a person wants to use the telephone and the Internet at the same time, which ISDN does not provide. The speed of DSL is also greater than that of ISDN, and ISDN is therefore regarded as outdated nowadays. Very few people still use this means of networking; most have shifted to DSL or other means (Held, 2000).

According to a 2005 survey, DSL subscribers at speeds up to 2 Mbps numbered almost 26 million, and those at speeds of 2-10 Mbps almost 28 million. Today, in 2010, about 55 million people use DSL at 2-10 Mbps, while approximately 60 million worldwide use DSL at speeds above 10 Mbps. One can therefore see which choice is better where good connectivity is required. ISDN is now old-fashioned; before DSL its importance was considerable, but as time and technology change, earlier inventions lose importance. It was nevertheless a great device and means of communication in its day.

New Materials

New materials are being invented every day, and they are changing people's lives. These new materials have revolutionized the world and our lifestyles. Moreover, they have brought numerous benefits to society and have become an indispensable part of our lives. The material world has progressed immensely, putting us at the forefront of a rapidly changing world. Radical material advancements have driven the creation of new and innovative products and even new industries.

Composite materials are engineered materials made from two or more constituent materials. In the modern world, composite materials have gained a lot of popularity and acceptance in the manufacturing of high-performance products. These products require strong, lightweight materials, and composite materials fulfill those requirements effectively (Anonymous, 2010). Aerospace is one industry where composite materials are in high demand and have revolutionized the products. Modern airplanes use composite materials extensively, which lowers their weight and increases their efficiency. Moreover, airplanes made from composite materials are very strong and can sustain harsh conditions.

Furthermore, composite materials are used extensively in the production of spacecraft. These craft have to withstand the harsh, unfriendly environment of outer space and thus need strong, lightweight bodies, needs that composite materials cover well. Composite materials also have a longer life span than iron or steel, making them more suitable for building airplanes and spacecraft. Their only disadvantage is their high cost.

Although composite materials are expensive to make and their manufacturing process is time-consuming, their advantages outweigh their disadvantages. These materials have revolutionized the aviation industry and led to the creation of newer airplanes that are lighter and more efficient than before.

Glass reinforced plastic, or GRP, is a composite material used in the manufacturing of doors. Inheriting the general properties of composite materials, these doors combine high strength with light weight, which makes them an ideal choice for entrance doors. They also offer an adequate level of security and thus serve a double purpose. This is another illustration of composite materials being used to make new and efficient products.

Liquid crystals are essentially a state of matter: these special materials have mixed properties of both conventional solids and liquids. They are most commonly used in modern electronic displays known as LCDs, which are now ubiquitous. From televisions to mobile phones, the LCD is the primary display device. These devices have revolutionized the television industry. Gone are the days of bulky old television sets giving blurred, fuzzy pictures; LCDs have reduced the television to the size of a photo frame, making modern LCD televisions both convenient and stylish. Beyond their overall look, LCD televisions are also much brighter and sharper than their conventional counterparts.

Furthermore, LCDs are used extensively in mobile phones, whose games and movies are only possible because of their displays; LCDs are thus integral to all these devices. They also hold a special place in the instrument panels of most modern airplanes, allowing the pilot to get the latest information.

New materials also help protect the environment. Many of them are environmentally friendly and thus aid in its conservation. Such materials are being used on a wide scale as worldwide environmental awareness increases. Moreover, as the world becomes more educated about the ozone layer issue and the greenhouse effect, these materials are very much in demand.

Many new materials are replacing the CFC gases that were used in aerosol sprays. These gases are very harmful to the ozone layer, making it thinner; new materials have replaced them and are being used as alternatives. Apart from this, many materials can be recycled, which allows them to be reused a number of times and thus helps the environment. Aluminum is one such material: it can easily be recycled and used repeatedly, a practice that helps in the conservation of the environment.

Similarly, glass can be recycled through proper systems, which also helps conserve the environment, and individuals can save costs and benefit from the recycling. Plastic materials have revolutionized the way things are made. These synthetic materials are used in a great number of consumer and industrial products; they have changed our lifestyles and raised our living standards. A casual look around shows how plastic has taken over our world, and the uses of these materials are increasing with each passing day.

Taking the automobile as an example, one can see that the automobiles of the 60s and 70s had very few plastic components: the bumper was made of metal, and the dashboard also had very few plastic parts. Comparing those automobiles with current production models, one sees that the majority of interior parts are made of plastic, from the dashboard to the door handles.

Apart from this, looking at the exterior, one can see that plastic has taken over everything from the lights to the bumpers. Plastic materials have also decreased the overall cost of production for a variety of items, making them affordable to the masses. Plastics are also very durable and long-lasting, so they can be used over a long period of time. With each passing day, more and more devices are being made from plastic. The ease of customization and the durability make plastic materials an ideal choice for a variety of purposes (Mishori, 2008). The early plastic materials, however, invited various criticisms regarding their environmental impact.

It was said that plastic materials, for all their advantages, were not environmentally friendly and were very difficult to recycle. Moreover, when those materials were burned, they produced poisonous gases harmful to the environment. Those criticisms were addressed by the introduction of biodegradable plastics. These plastics are made from specific materials and are very environmentally friendly: they act like normal plastics, but when left in the natural environment they decompose naturally and thus do not harm it.

Furthermore, certified biodegradable plastics combine the utility and properties of normal plastics, such as light weight, strength, and low cost, with the ability to decompose naturally at proper disposal sites. These kinds of plastics are thus the ultimate combination and have a bright future. Fire retardant materials are materials that are able to resist burning and withstand heat.

Such materials are very helpful in harsh and extreme conditions and are mostly used by firefighters to protect themselves from fire (Berger, 2007). These special materials are also used to produce fire retardant fabrics, which offer protection against fire and thus have a number of uses (Indian Patents News, 2010). Such fabrics are used in places where there is a fire hazard and proper safety measures need to be in place.

For instance, in various public places such as schools, libraries, and hospitals, one can see these fire retardant fabrics being used extensively and applied properly; curtain fabrics in particular are made of fire retardant materials so as to minimize the fire hazard. Non-conducting materials are materials that do not allow the passage of charge; they resist the flow of electrical current and heat. Such materials are used in a variety of places. Electrical insulators are one example in which non-conducting materials are used: these insulators resist the flow of electrical current and are used for wire coating so that current does not flow outside the wire.

Electrical insulators are used in a number of home devices to save the user from electrical shock. Glass, Teflon, and paper are some examples of electrical insulators, and plastic materials are also used extensively for this purpose. As with any material, these insulators have a certain limit up to which they can withstand electrical current; once that limit is exceeded, they no longer stop the electrical charge from flowing outside, and the user may be harmed.

Each electrical insulator thus has a certain limit, and insulators are chosen according to the requirement. For high voltages, certain insulators such as liquid insulating oil are used to prevent electrical sparks. Apart from this, other materials used to insulate high-voltage systems are glass or ceramic-based materials, which prevent current from flowing outside.

Another type of material is the semiconducting material. These materials can be controlled to allow a limited passage of charge or current (Fishlock, 1967). Because this passage is controllable, numerous functions can be performed and properly executed, so the applications of such materials are nearly endless. Popular uses of such materials are in transistors and lasers.

Transistors are semiconducting devices which are used to switch or modify electrical signals according to the requirement. These devices form an integral part of modern electronics; without them, most electronic devices would not function. Computers are one example where transistors play an active role and hold a central position. Transistors are also used as amplifiers to increase voltage and thus appear in a great number of devices. Lasers also work on the principles of semiconducting materials and serve a number of practical purposes; medical lasers, for instance, are used to perform complex surgeries and act as accurate, precise tools.
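To make the amplification idea concrete, here is a standard textbook approximation (the resistor values are assumed purely for illustration): in a common-emitter transistor stage with emitter degeneration, the small-signal voltage gain is set roughly by the ratio of the collector resistor to the emitter resistor,

    % Common-emitter stage; valid when R_E is much larger than the
    % transistor's intrinsic emitter resistance r_e
    A_v \approx -\frac{R_C}{R_E}
        = -\frac{10\ \mathrm{k\Omega}}{1\ \mathrm{k\Omega}} = -10

so a 10 mV input swing appears as roughly a 100 mV output swing, and the minus sign indicates the phase inversion characteristic of this configuration.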
Materials science has become so advanced that one is able to see new materials being made with extra strength. Research in these materials has made some incredible advances, and we are now able to take advantage of super-strong materials. Super-strong plastics are one example. These plastics can be used in places which require extra strength and durability; being lightweight as well, they offer the best possible solution and are therefore used in a variety of places and products.

Many new materials perform safety-related tasks and are thus used extensively in many key areas. Bulletproof materials are very tough and rigid. When these materials are used to make special clothing and vests, they can prevent a bullet from passing through and therefore help save the wearer from serious injury or death. These materials are used especially by law enforcement agencies and militaries around the world.

The same kinds of materials can also be used to make bulletproof glass. This special glass is transparent like ordinary glass but has extraordinary strength, which is particularly useful in preventing a bullet from passing through. As the technology improves with each passing day, one is able to see the development of one-way bulletproof glass. This special glass stops a bullet from penetrating from the outside, while at the same time a person inside can fire back and the bullet will pass through the glass.

Such extraordinary qualities make this glass an ideal choice for armored cars, which are used by security companies to transfer cash between locations and by law enforcement and military agencies. While discussing the security applications of new materials, one can also consider their benefits in securing bank vaults. Bank vaults are used to store valuable items and cash and thus need to be highly secure in order to prevent a break-in.
Recent advancements in material technology have therefore brought new materials which are able to secure the bank vault, reducing the chances of a burglary. For instance, fire-retardant materials are used in bank vaults to prevent someone from using fire to open the vault. Apart from that, high-strength materials are used in the construction of these vaults to further secure them against any forced entry.

These materials have changed our living standards and enabled us to enjoy a variety of luxuries. From the mobile phones we use to the cars we drive, one can see how materials work together to perform numerous tasks. Environmentally, these new materials are helping to conserve the environment. Moreover, as the world grows more and more populated, these materials are enabling us to increase our food productivity and thus feed our future generations. Genetically modified foods are one example that can enable us to increase food productivity.

These crops are able to produce more than conventional crops (Financial Times, 2010). By introducing modified genetic material into a crop, one is able to reduce its natural weaknesses, such as susceptibility to pests and disease, helping to ensure that the crops are largely free of such defects.

Other than this, genetically modified crops hold a lot of potential for third world countries which are facing huge food shortages (Salem, 2010). These crops can help ensure that everyone on this planet has something to eat and that no one dies of hunger and malnutrition. Future developments in genetically modified organisms promise a number of applications. For instance, banana plants may be able to produce vaccines for diseases such as Hepatitis B. Apart from this, genetically modified organisms also promise better varieties of fish that would grow quickly, and newer varieties of plants that would produce fruits and vegetables in a short span of time. Such radical inventions are possible thanks to genetically modified organisms, which also hold the potential to eradicate various diseases that plague the planet. Through proper research and development, many new species of crops and plants could be created to help solve a number of problems. The future is full of hope and anticipation, and these new materials could help fulfill those aspirations. The current pace of research and development in these new materials will allow many different kinds of materials to be developed.

Beyond this, these advancements also promise newer types of trees which would be able to produce new varieties of plastics offering particular benefits. The overall benefits of such materials are thus far-reaching. Currently we have only touched the tip of the iceberg, as these new materials promise to take us into a future where everything is possible. They will be the building blocks on which future developments occur, and they will allow the development of new technologies and products that help us live a more relaxed and better life.

Unclassified commercial electronic assault technology

Electronic harassment and control technology was started in the 1950s as a branch of the CIA's MKULTRA project group. Just as organized crime is not stopped by hearings and court cases, neither was this originally obscure branch of MKULTRA activity, even when the institutional/drug/child abuse phases were exposed by the U.S. Senate's Church-Inouye hearings in the late 1970s. No criminal proceedings followed, and only two civil lawsuits (Orlikow and Bonacci) have succeeded. There have been incidents of the use of unclassified diagnostic clinical equipment on a person without contact. Patented human models of the face and eye have been used; these are based on remote sensing and monitoring systems. A person's medical information has been mapped onto these diagnostic face and eye models so that a remote operator can, in effect, see through another person's vision.

Technologies Used in Unclassified Commercial Electronic Assault
There are two main transmission methods for neuro-effective signals:
Pulsed microwave (i.e., radar-like signals)
Ultrasound and voice-FM (transmitted through the air)

The transmission of speech, dating from the early 1970s, was the first use of pulsed microwave; at that time, neuro-effective signals were of an unknown type. In the present scenario, many other nerve groups can be remotely actuated through neuro-effective signals. Brain entrainment is a technique used to induce relaxation or sleep, making a person susceptible to hypnosis.

Patented Human Model of Face and Eye Apparatus
This is done through computer systems and may or may not be voluntary. A common example is the invention known as the Cyberlink Mind Mouse, a revolutionary hands-free computer controller that allowed the user to move and click a mouse cursor, play video games, create music, and control external devices, all without using the hands. It was based on sensors detecting the electrical signals generated by facial muscle, eye and brain activity, worn in a headband.

Eye trackers measure rotations of the eye in one of several ways, but principally they fall into three categories:

The first category uses an attachment to the eye, such as a special contact lens with an embedded mirror or magnetic field sensor; the movement of the attachment is measured on the assumption that it does not slip significantly as the eye rotates.

The second broad category uses a non-contact, optical method for measuring eye motion.
The third category uses electric potentials measured with electrodes placed around the eyes.

Figure 1. Eye movements of drivers.

Eye tracking is commonly used in a variety of advertising media. Commercials, print ads, online ads and sponsored programs are all conducive to analysis with current eye tracking technology. This may or may not involve the user's awareness.
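To make the second category above concrete, here is a minimal, hypothetical sketch of a video-based, non-contact tracker that estimates the pupil centre by thresholding the dark pupil region in each frame. It assumes the OpenCV library (version 4 return conventions) and an ordinary webcam; the threshold value and camera index are placeholders. Real trackers add infrared illumination, corneal-reflection modelling and calibration, so this illustrates the principle only.

    # Minimal dark-pupil tracker sketch (illustrative, not production code)
    import cv2

    def find_pupil(frame):
        """Return the (x, y) centroid of the darkest blob, a rough pupil estimate."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (7, 7), 0)  # suppress sensor noise
        # The pupil is usually the darkest region of a close-up eye image;
        # 40 is an assumed threshold that would need tuning per setup.
        _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)  # largest dark blob
        m = cv2.moments(pupil)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # blob centroid

    # Example usage with a webcam (device 0 assumed):
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print("estimated pupil centre:", find_pupil(frame))
    cap.release()

Tracking the centroid from frame to frame gives the eye-motion signal that, after calibration against known screen targets, yields the gaze positions used in advertising studies.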

The LIDA Machine
In 1982-83, Dr. Ross Adey studied a LIDA machine. This machine transmits a pulsed 40 MHz radio signal at pulse rates originally designed to match relaxed and sleeping states.

Figure 2. The LIDA machine.

The manual describes it as a distant pulse treating apparatus for psychological problems, including sleeplessness, hypertension and neurotic disturbances.

CONCLUSION
It is clear that electronic assault is possible through modern technology. There are various instances in which people can be assaulted and their identity used or misused. The risk of the theft of biometric information can be devastating.

LEADERSHIP MODELS AND ORGANIZATIONAL COMMITMENT

A great deal of research has been conducted examining the leadership styles of managers and administrators. Much research has also been done to measure the organizational commitment of employees; however, little is known about the relationship that exists between these two variables. This study aims to fill this gap by investigating the relationship between managers' leadership style and employees' organizational commitment.
       
The employees are the organization's front liners. They are the key players in making sure that the goods are delivered to the clients. Moreover, it is important for managers to be aware of their subordinates' degree of organizational commitment so that they may look into possible ways of strengthening that commitment.

Lastly, data from this research may contribute to the existing literature on the relationship between leadership styles and organizational commitment.

Introduction

Leadership begins with character.  Inner and personal attributes such as honesty, willingness to serve, recognition of others' good deeds, care for the human person and identification with the larger goals of the organization all contribute to the making of a leader.  The leader is admired, respected and trusted by his followers because his character is deemed worthy of emulation (Bass and Avolio 1994, 3).

But times have changed.  In an age of accountability, it takes more than a respectable character to run an organization.  Gill (2006) asserts that character traits appear to be enabling rather than determining.  To believe that universal traits define a leader is nothing more than a return to the antiquated trait theory of leadership, which suggested that special traits and personality set leaders apart from non-leaders (Gill 2006, 41).  A leader must think in terms of performance, not personality.  It is not so much what he does, but what he achieves, that matters (Reddin 1970, 3, 9).

A leader's effectiveness is measured by the extent to which he influences his followers to achieve group objectives (Reddin 1970, 8).  Leadership effectiveness is seen in relation to a specific situation: a leader's effectiveness is a function of his ability to adapt his leadership style to the demands of the situation.  His effectiveness is crucial to the success of the organization.

I.  Leadership in Organizations

According to Stogdill (1974), leadership is a process of interaction between persons who participate in goal-oriented group activities.  Stogdill's concept of leadership leads to three assumptions: first, that leadership is a function of an individual; second, that leadership is an aspect of group organization; and third, that leadership is concerned with attaining objectives (Stogdill and Shartle 1974, 287). These objectives set the direction of the organization, making leadership a crucial function (Simon 1948, 9).
The earliest systematic research on leadership identified the traits of great leaders, assuming that great leaders were endowed with universal characteristics that set them apart from non-leaders. Around the 1930s to the 1940s, however, several studies altered this view of human traits as the basis of leadership, and researchers began to look into leader behavior as the basis of leadership instead.

Leadership Theories
Inasmuch as the behavior of leaders in an organization is crucial to leadership effectiveness, evidence from research clearly indicates that there is no single all-purpose leadership style (Korman 2006, 349-361).  Successful leaders are those who can adapt their behavior to meet the demands of the environment; the situation is an important factor contributing to the leader's effectiveness. This discovery led to the development of Situational or Contingency Theories in the late 1960s. Contingency Theories suggest that there is no one best style of leadership: successful leadership depends on the nature of the situation and the followers (Fiedler 1967, 12).

Fiedler's Leadership Contingency Theory
Fiedler pioneered the study of contingency theories in the 1960s.  His Leadership Contingency Theory rests on two basic assumptions.  First, the Contingency Model suggests that a leader has either a Relationship Oriented Style or a Task Oriented Style.  Second, the three most important situational variables interacting with a leadership style are (a) Leader-Member Relationships, (b) Task Structure and (c) Formal Position Power.  All three variables have an impact on the degree of control of the leader.

A few of the criticisms of Fiedler's Theory (1967) have been the difficulty of assessing its variables and the little attention it gives to the characteristics of subordinates (Walter 1969, 33-47). Nevertheless, his work paved the way for a more adequate analysis, not only of leadership effectiveness, but also of both the situation and the organization.

Hersey and Blanchard's Situational Leadership Theory
Hersey and Blanchard developed the Situational Leadership Model (Hersey and Blanchard 1996).  This theory assumes that leadership style, the behavior of the leader as perceived by the followers, can be classified into task behavior and relationship behavior.

Task behavior is defined as the extent to which the leader spells out the duties and responsibilities of an individual or group.  This behavior includes telling people what to do, how to do it, when to do it and who is to do it.  Relationship behavior, on the other hand, is defined as the extent to which the leader engages in two-way or multi-way communication.  This behavior includes listening, facilitating and supportive behaviors (Hersey and Blanchard 1996, 191).

The biggest contribution of Hersey and Blanchard (1996) to the understanding of leadership is the importance given to the Readiness/Maturity level, defined as the follower's willingness and ability to accomplish a specific task (Hersey and Blanchard 1996, 193), as an important situational factor that determines the effectiveness of any leadership style.

William Reddin's 3-D Theory of Leadership Effectiveness

In 1970, William Reddin developed the 3-D Theory of Leadership Effectiveness. This theory was chosen for this study because it attempts to bring together the theoretical bases previously mentioned, such as leader traits, leader behavior, leader-follower relationships and situational factors. The 3-D Theory differentiates itself sharply from most behavioral theories in the centrality it gives to effectiveness: it suggests that the prime purpose of any leadership action is to improve effectiveness (Reddin 1970, 182).

House-Mitchell Path-Goal Theory
Evans (1970) put forth that leadership serves two important functions: path clarification and rewarding.  House (1971) expounded this idea and, together, they developed the Path-Goal Theory. They contended that a more comprehensive theory must be able to recognize at least four distinctive types of leader behavior.  These are: (1) Directive Leadership, which provides specific guidance and clarifies expectations; (2) Supportive Leadership, which shows concern for the status and personal needs of subordinates; (3) Achievement Oriented Leadership, which sets challenging goals, seeks performance improvement and emphasizes excellence; and (4) Participative Leadership, which consults with subordinates in decision-making (House 1971, 324-325).

The two situational factors that mediate between leader behavior and subordinates' outcomes are (a) follower characteristics and (b) environmental factors.


Transformative Leadership
Bass (1985, 3) expounded the idea of transformational and transactional leadership and characterized transformational leaders as harnessing in their subordinates the capacity to perform beyond expectations. Transformational leaders motivate and stimulate their subordinates to transcend their own personal interests for the greater good of the group, organization or society.

Transformational leaders tend to use one or more of the four I's: individualized consideration, intellectual stimulation, inspirational motivation and idealized influence.  As a result, subordinates want to meet expectations and display commitment, not merely complying with the vision, mission and tasks (Bass and Avolio 1994, 27).

II. Organizational Commitment
Barnard (1938) defines a formal organization as a system of consciously coordinated activities of two or more individuals who are (1) able to communicate with each other, (2) willing to contribute action and (3) intent on accomplishing a common purpose.  Thus an organization, simple or complex, is always a system of cooperative human effort guided by a purpose and a personal willingness to contribute to the organization (Barnard 1938, 73, 82).

The capacity of any organization to thrive depends on the willingness of its members to contribute effectively to the organizational purpose. The members' willingness and contribution are likewise dependent on the satisfaction they get from the organization: if the satisfaction outweighs the sacrifices, there is organizational equilibrium (Barnard 1938, 82-83).

The Theory of Organizational Equilibrium is essentially a theory of motivation: a set of conditions that can encourage members to continue membership in their present organization.  This theory reflects the organization's success in arranging payments to its participants to induce their continued participation (March and Simon 1958, 84-93).

Becker (1960) explains commitment as consistent behavior: a disposition to engage in consistent lines of activity as a result of the accumulation of side bets that would be lost if the activity were discontinued. Consistent lines of activity refers to the decision of the individual to maintain membership in his present organization; a side bet, on the other hand, refers to anything of value in which the individual has invested that would be lost, or deemed worthless at some perceived cost to the individual, if he decides to leave the organization.  The perceived cost of leaving the present organization may be magnified by a perceived lack of alternatives to replace the forgone investments (Burns 2004, 33-35).

Kanter (2006, 504) describes cognitive-continuance commitment as that which occurs when there is a profit associated with continued participation in the organization and a cost associated with leaving it. She likewise defines cohesion commitment as the attachment of an individual's fund of affectivity and emotion to the group.

For Stebbins (1970, 527), continuance commitment is the awareness of the impossibility of choosing a different social identity because of the penalties or inconveniences that would result from this choice. Kanter (2006, 26), on the other hand, defines commitment as the pledging or binding of oneself, as in committing oneself to a course of action.  Extensive experiments on this subject identified three variables, namely (a) extremeness of attitudes, (b) familiarity with the attitude issue (depth of knowledge) and (c) social support for the attitude (degree of affiliation with others advocating the same stance) (Kanter 2006, 27).  A person, it is hypothesized, can be committed to a group either by membership (belonging to a group in a semi-formal way) or by reference, a reference group being one from which the individual gains values, opinions and so forth (Kiesler 1971, 176).

Furthermore, leaders of this type cannot succeed on their own, and so authentic leaders must build an extraordinary support team to help them.  This team contributes counsel in times of uncertainty, offers help in times of difficulty, and celebrates in times of success.

Authentic leaders must know how to integrate and bring together all the elements needed for leadership.  Spending time with family and friends, getting physical exercise, engaging in spiritual practices, doing community service, and going back to the places where they grew up are some of the components leaders consider essential to their effectiveness.  This integration may be one of the greatest challenges a leader has to face, and discipline is the key to overcoming it.

Leadership does not necessarily mean success, nor having loyal followers; rather, it is the proper empowering of the people in the organization.  Authentic leaders must know that the key to achieving the organizational goal is to empower people at all levels, not only giving them inspiration but giving them the chance to step up and lead. No individual achievement can match the satisfaction of attaining a worthy goal by leading a group of people.

The most prevalent approach in the organizational commitment literature is one where commitment is considered an emotional or affective attachment to the organization, with which the employee identifies, in which he is involved and in which he enjoys membership.

Evans (2009, 533) conceptualizes commitment as a partisan, affective attachment to the goals and values of the organization, to one's role in relation to these goals and values, and to the organization itself apart from its purely instrumental worth.

Porter, Mowday and Steers (1979, 226) best represent the affective attachment approach by defining organizational commitment as the relative strength of an individual's identification with and involvement in a particular organization.  It can be characterized by three related factors: (a) a strong belief in and acceptance of the organization's goals and values, (b) a willingness to exert considerable effort on behalf of the organization and (c) a strong desire to maintain membership in the organization.  Commitment therefore represents something more than loyalty to an organization.  It involves an active relationship with the organization, in which employees are willing to contribute and cooperate for the fulfillment of organizational goals, as evidenced not only in their beliefs and opinions but in their actions as well (Porter, Mowday and Steers 1979, 226).

Summary
Managers are faced with a myriad of responsibilities.  The time, effort and skill they put into their jobs are crucial to their effectiveness.  They should be aware of their employees' level of organizational commitment so that they may explore possible ways of strengthening it. As the organization's front liners, employees are the key players in making sure that the goods are delivered to the clients. A more harmonious and cooperative relationship between managers and their employees should be forged, which is important for the smooth running of the organization.
The relationship between superior and subordinate is one of the most important aspects of an organization.  Superiors and subordinates share a common vision, albeit with different roles and expectations. This vision is articulated and spelled out by the administrators, while the employees serve as catalysts in its realization.

The subordinate, however, must first be able to bridge the gap between what he is asked to do by the requirements of his job and his own personal beliefs and values before he can commit himself to the organization.  Commitment is forged when the ideals and values of the employee are congruent with those of the organization.  An employee's active participation in the fulfillment of the company's vision and mission is determined by his commitment to the organization.