Terror on the Internet, by Gabriel Weimann


The author of this book is a communication professor at a university in Israel; thus, by virtue of living at the center of a terrorism-afflicted state, he presents in convincing manner a troubling analysis of the phenomenal popularity of the Internet among pugnacious extremist organizations that seek to spread their message of hate and spur recruitment through global forums. The theme of this account revolves around the capacity and resilience of terrorists in fighting their wars not only on the ground but in the realm of cyberspace too. Terror on the Internet is the author's attempt to map this emerging and painfully dynamic arena. By guiding us through the topic, he lays us bare and exposed to the burgeoning challenges that arise over a smoky scene of uncertainty and risk: uncertainty that is the result of the growing and intricate presence of terrorists on the Internet. Terrorism is thus Weimann's theme, and its ramifications are the concern he tries to interrogate.

The research involved
Gabriel Weimann is an eminent and acclaimed analyst of the menace of global terrorism and its related mass media, and his credentials are amplified by his expertise in the field of communication, which he teaches at Haifa. Since 1997 he has not only instigated debate but led a large research program devoted to the role of the Internet in the realm of terrorism. To arrive at authentic and convincing results, he catalogued and tracked a multitude of websites that served as tools of terrorist organizations. The research data encompassed not only sites directly involved as mouthpieces of terrorism but also those used as proxies by affiliates to spread words of hate, indoctrinate, recruit, gather critical mass support, appeal for and collect funds, and plan and synchronize attacks. The breadth and depth of the data, and the crucial lessons that culminated from it, led Weimann to publish five books and more than a hundred chapters and articles for books, journals, and magazines on the topics of conflict, extremism, terrorism, hate speech, mass media, and the use of IT and the Internet.

To produce an emphatic analysis he dug deeper into the sphere of terrorism, not only sifting and categorizing the types of use made by terrorists but also inspecting and analyzing their material. He tried to understand their motives and tactics by digging into the material posted. To distill the essence and commonality of all these messages, he applied a multitudinous analytical approach to all the written, audio, and visual content displayed.

Questions
During seminars, workshops, and fellowship years he elaborated the underlying arguments his research has tried to answer, in a way highlighting the focus of his endeavor through the following questions:

Who really are the terrorists on the Internet? Lumping all struggling groups under one umbrella would be unreasonable toward those with legitimate grievances. Where does the fine line lie, and who is the arbiter to separate good from bad?

Spurring from the first question is a second and more intriguing debate: who should be lumped under the label of terrorist? This perplexity raises a concern for the world, which has yet to come up with a clear and consolidated definition of terrorism, one that is universally acceptable.
How do terrorists and their organizations use the Internet? Is there a common pattern among all groups, or does use vary across regions depending on the type of message being delivered?

What are the different schemes and strategies of these organizations, and what is their rhetorical maneuvering? Does it change with the nature and ferocity of the message?
How do the free-world democracies respond to threats emerging from such Internet-based wars?

Sphere of research
Weimann has explained some of his research findings by shedding light on the various channels and purposes for which such shadowy organizations have used the Internet, which include:

Psychological warfare
Promotion and accompanying propaganda
Mining data available on the Internet (data related to government and the public) and orchestrating plans for executing their designs
Reaching out to sympathizers and generating funds from them
Expanding sympathizers for support, sieving backers for funds, and carving out hard-core advocates for recruitment, thus dilating the support base
Establishing channels of communication and networking with similar terrorist groups
Sharing their designs, information, expertise, instructions, and techniques
Planning and coordinating activities
Waging an impactful cyberwar and the looming threat of cyberterrorism

Loopholes
Weimann's account is overall strong and persuasive, because he consistently backs his findings with the investigative results of his exhaustive research. Nevertheless his account sometimes begins to lose its sheen and vivaciousness, and thus appears lifeless.
Cyberterrorism is one threat he vehemently dismisses as hollow, although we have lately seen increasingly persistent attacks on US government websites by vexatious elements, mostly emerging from Asia. He rightly says that such organizations have little to gain by defacing sites or stealing data, which to some extent reflects their incapacity to break in or their lack of stakes, yet such attacks are a reality. It might require research of another order to dig into the patterns and sources of such attacks, but shrugging off their capacity to cause disruption is no longer possible. Indeed, a simulation conducted by the US Department of Homeland Security (2007) of a possible cyberterrorist attack on the northern US electric grid showed that a team of hackers broke into grid operations and successfully shut it down. Wider loopholes are waiting to be abused; it is just a matter of time before someone breaks in through the back door. Thus Weimann's assertion that no terrorist has yet been able to penetrate a major Western computer system and cause major harm is itself refuted by homeland exercises that convey these vulnerabilities.

As Robert H. Fort stated in the International Herald Tribune, by citing so many articles and stories from newspapers and journals Weimann has made his book read more like an academic report. There is nothing wrong with this approach, but perhaps he overburdened himself and his readers.

Fort goes further, saying that on the issues of the Patriot Act and civil liberties, Weimann went overboard by giving broad coverage to both proponents and critics while offering little or no serious analysis of his own.

Could these be willful omissions or evasions?
I will come to the question of bias later, but there are some arguments that need to be objectively examined. My first and foremost objection concerns the list of recommendations Weimann puts forward to curb the menace of cyberterrorism, for instance modification of the Patriot Act to increase transparency, followed by his statement that people will have to give up some liberties while accepting limited vulnerabilities. Dig into every speech the US President gave to his people and to the free people of the world after 9/11: this is not a war on America, it is a war on our values, our principle of liberty, and our free way of life. Pulling our own strings with our own hands would mean giving up and playing into the agenda of those who want to suffocate people's freedom; protection does not warrant killing independence, rather it means more enlightenment. Maybe America failed where they succeeded, and instead of confining itself it needs to go forward and spread light to curb the dusk.

Weimann comes close to this point by being a protagonist for education, for counterterrorism sites, and for the promotion of a harmonious use of the Internet for conflict management and resolution.
Then we come to the definition of terrorists. Who are they? Is their cause legitimate? Are their actions admissible? Who is the target, and are they really people with a cause and not stooges of governments and agencies? Who is the arbiter, who is going to decide, and is there anyone to listen and implement if their cause really is sanctioned? Where does the fine line lie between freedom fighter and terrorist? So many questions and very few answers. Weimann cites several examples from the roughly thirty organizations listed by the US State Department, each with its own cause. Take the LTTE guerilla group fighting for ethnic Tamils' independence from Sri Lanka: to some they are fighting for freedom and spreading the message of their just cause, while others label them terrorists because of brutal tactics they claim are a weapon of last resort. The whole context of the discussion remains doubtful and contentious unless we first clarify that fine line, and Weimann glided past this controversial yet crucial discourse without engaging it.

Is he biased?
Comprehensively speaking, Weimann has done commendable work in tracking down the true nature of the terrorist footprint on the Internet. He began his research before the start of the second intifada, which is known to have blatantly slanted opinion polls against the Palestinians, and more importantly before 9/11, which turned international opinion against Muslims at large. As far as Israel is concerned, the place where the author resided and conducted his research, the general atmosphere had been relatively peaceful and both sides, Palestinians and Israelis, had been moving toward the peace process. The point of this detail is that there did not exist any compelling stimulus at the time that could have compromised the writer's objectivity. Also, in most cases he presents both proponents and critics on any given issue, for instance cyberterrorism, and keeps his own judgment private. This also sends the potent message that Weimann, by virtue of being an Israeli and thus prone to skepticism, kept himself clean while elucidating the different hues of terrorism. His choice of studying only those organizations, thirty in total, that were on the terror list of the US State Department was a clever move that made the basis of his research nonpartisan. Having said that, he nevertheless commented and concentrated more on Islamic terrorist activities, especially those emerging from the West Bank, Lebanon, Syria, Iran, and the greater Middle East, and in general on Muslim activities across Asia, Central Asia, and Africa, even when covering the Basque ETA, the FARC guerilla organization of Colombia in South America, Chechnya, Sri Lanka, Afghanistan, and so on.

Most of his references, his own articles from newspapers and journals, had been around the activities of Hezbollah, Fatah, and Hamas, a trio intrinsically opposed to the existence of the State of Israel and thus thoroughly scrutinized by virtue of operating on and within the borders of Israel. There lies an innately natural thrust in Weimann's keeping these organizations at the pinnacle of terrorism, as the menace of Al Qaeda emerged almost four years after the research began. But we can brush this intrinsic inclination aside given that he himself was conscious of it and vehemently attempted to keep the focus of his book universal rather than regional. As he admitted in a keynote at USIP (May 2004), he did not want to make this piece of research a regional handbook catering only to Middle Eastern troublemaking strands; rather, the nature and scope of his research and findings were universal. And scholars believe that he did his work earnestly.

My take on the conclusion
This is undoubtedly the most comprehensive book I have read on the subject: exhaustive, universal, and eye-opening. Weimann mentions some 4,300 tracked websites run by terrorist organizations; though the number is quite small compared to the enormous number of websites that are up and still pouring in, it is still large enough to lure anyone who falls into their trap, and their ability to keep changing their footprints continuously, turning them into ghost sites, is vexing. This, along with large swathes of information, was an eye-opener for me, as I had never before imagined this facet of the terror infrastructure. His coverage of the subject at hand is splendid, and his ending in caveats melded with recommendations is impressive. As he states, this is a psychological war being waged over minds and hearts, and the terrorists' ever-increasing capacity to dodge on the Internet resembles those virulent strains that mold and adapt to changing environments and conditions. They have found a medium they know is extensively used by people of all ages and races, and they are learning and adapting to exploit it. This makes the Internet highly exposed and its users vulnerable to their trap. He is again right in all his recommendations except the alterations to the Patriot Act and Homeland Security. Weimann's direct audience is youth, as they are the group most tuned to the Internet and the ones the extremists are targeting. As he rightly puts it, it is the youth who get trapped in the rhetoric, so they must be taught the real truth behind those sites: the truth about terrorism and its nefarious goals.

The study dates back to 1997, and a lot has changed since then. There are extended wars across the same region that initially had been only a source of terrorist ideology but today stands as a hotbed, a breeding ground, of all sorts of terrorism. Much of it was spurred by the actions of the same free world: Iraq, Afghanistan, Chechnya, Lebanon, and the drumbeat against Iran. Such turmoil, though intended to spread freedom and democracy in the greater Arab peninsula, has rather proven a bonanza for the same elements that previously relied merely on spiteful ideologies. Leaving aside the situation on the ground, the world today needs renewed effort and commitment to check the hateful drift of contaminating cyberterrorism.

With the Internet evolving through new trends, technologies, and dimensions, experiencing a paradigm shift every three years, there needs to be a concerted effort to track this evolution. Continuous research is needed to equip the world against new challenges; this book laid the foundation stone of what could be the next academic tryst.

Food Crisis

The humanitarian agenda has been dominated by the food crisis, which has become a global affair. Some years ago, the skyrocketing prices of commodities and their shortage attracted little attention, even with frequent warnings from aid groups. However, food riots in Africa, Asia, the Middle East, and the former Soviet Union have brought the crisis into the spotlight. In fact, recent statistics from the United Nations indicate that over 100 million people are on the brink of starving. The food crisis is a consequence of a number of factors.

The major contributor to the food crisis is the rapid increase in demand for protein-rich diets, like meat, in both developing and developed countries. This implies a high demand for land to be used in the production of grain to feed cattle. To be specific, reports indicate that the share of world grain production fed to animals has increased from a low of 10% to 50% since the year 1900. More reports from the United Nations indicate that meat production accounts for 18% of greenhouse gas emissions, which is above what is generated by automobiles. Shocking statistics have also shown that humans spend over one third of the grain produced to feed animals at the expense of their fellow humans (UNCTAD, 2008). In response, scientists offer the advice that if humans chose to forego the consumption of meat for a period of twelve months, this would translate to a reduction in greenhouse emissions of over 1.5 tons, which would have a far-reaching effect on weather patterns and the food crisis. Scientists also express their concern that it makes no sense to produce a 100-calorie piece of beef after using 700 calories of grain.

Another major cause of the food crisis is the biofuels security act of 2007. The act saw much of the food crop supply diverted to the production of first-generation biofuels. In fact, statistics indicate that over 100 million tons of grain are diverted to fuel production each year. The use of biofuels has had the greatest impact on developing and least developed countries, whose economies are dominated by food imports from better-off countries. Research conducted by the World Bank indicated that the production of biofuels in both Europe and the United States had a direct impact on the food crisis and the soaring prices.

Similarly, the food crisis can be explained directly or indirectly by the doubling of the price of fuel in 2007-2008. Indirectly, the increase in oil prices led to an increase in the cost of transport and of the inputs required for agricultural production. Directly, the United States, the European Union, and Brazil responded by seeking to cash in on the increased oil prices, going ahead to offer subsidies for the production of agrofuels at the expense of food production.

Trade liberalization also contributed to the food crisis, especially in developing countries, in that free market economies have turned developing countries into debtor nations that often rely on food imports, which are frequently bought at a more subsidized rate than what is locally available. The outcome is that local farmers get discouraged and shift to different activities. Coincidentally, even the subsidized food imports are not forthcoming, since there is a global shortage of food for humans.

Unpredictable weather patterns, which are a direct consequence of global warming, are another cause of the global food crisis. The current level of industrialization and of emissions from industry and automobiles has altered atmospheric ozone levels. Plants are known to be highly sensitive to ozone, which translates into low yields, especially for cereals, resulting in a drop in food production.
 
The strategy of idling cropland has also contributed to the food crisis. The United States is known to pay farmers to idle their land; by 2007, the share of idle land stood at 8%. In addition, some arguments about the causes of the food crisis cite an unprecedented growth in population that has outpaced grain production. Research has shown that grain production increases at half the rate of population growth. However, the World Hunger Program executive director is of the opinion that food production has made remarkable growth in the recent past, while population growth has been maintained at a minimum of only 1.14%. He is actually of the opinion that there is plenty of food on the shelves, but the purchasing power of individuals has been reduced by the inflation that has rocked a large percentage of the global economies.

Winners and losers
In the crisis, a number of parties are bound to benefit at the expense of others. For instance, multinationals that have mastered the art of international trade stand to make the best of the crisis, in that they will monopolize food production, processing, and the distribution chain. In fact, a number of the multinationals involved in the sale and distribution of agricultural inputs such as fertilizer and seeds are already reaping from the crisis. For instance, Monsanto and DuPont, the largest seed companies, reported increases in their profit margins of a whopping 44% and 19% respectively. Their counterpart Sinochem, involved in the production of fertilizer, reported a 95% increase in profits in the 2006-2007 financial year.
In the supply of pesticides, the same companies involved in the distribution of seeds are having an easy ride, since they command 84% of the global market. The key players in the seed and fertilizer industry are facilitating mergers and acquisitions of small firms so that in the long run they will enjoy economies of scale that ensure they compete effectively in the global market.

On the other hand, farmers are the main losers in the crisis, since they will have to contend with purchasing their agricultural inputs at inflated prices. The crisis will also lead to the violation of farmers' rights to keep native seeds. Consumers are also on the receiving end, since they have no alternative other than to consume genetically modified foods, whose safety has so far not been approved in some jurisdictions. This is due to the fact that over 80% of the commercial seeds available on the shelves today are genetically modified.

Solutions
Several solutions have been proposed to curb the food crisis. First, farmers need to focus their attention on the production of food crops for humans rather than using food crops to feed animals at the expense of humans. Secondly, farmers should also step back from the production of biofuels, and governments should consider scrapping the subsidies associated with biofuel production.

Animal farming has been cited as one of the largest sources of the greenhouse gas emissions that drive global warming, so it is recommendable that it be minimized. Finally, there should be a reorganization of the food distribution systems currently in place, as well as a strengthening of the capacity of developing and least developed countries.

Photovoltaic array (Olmedilla Photovoltaic Park)

Currently one of the most popular and fastest-growing methods of electricity generation is the use of photovoltaic cells, which convert solar energy into electricity. Central to this process is the photovoltaic effect: in simple terms, energy packets of light (photons) strike electrons and move them into a higher energy state, and the energy released as a result of this effect is extracted as electricity. Solar cells produce direct current (DC), which can serve multiple purposes: it can be used for DC applications directly, and for AC applications via an inverter that converts DC into AC. Photovoltaic cells are used as modules, i.e. collections of single cells, which can provide power to a regional grid. Applications like electric cars, emergency telephones, and remote sensing now use photovoltaic cells. Solar panels, or photovoltaic modules, are collections of single photovoltaic cells connected together, and modules in turn can be connected to form an array. An array can provide a large amount of electricity for various applications; for instance, it is sufficient to supply electricity to houses. The significance of photovoltaic cells can be highlighted by the fact that approximately 1,864 GW of electricity could be generated globally by 2030 using solar cells.

OLMEDILLA PHOTOVOLTAIC PARK
Olmedilla Photovoltaic Park is the biggest photovoltaic array park in the world. It is located in Olmedilla de Alarcón, Spain, and was completed in September 2008. This mammoth cell park uses 162,000 flat solar photovoltaic panels to generate a massive 60 MW of electricity on a sunny day, an output sufficient to power approximately 40,000 homes. Using conventional silicon-based solar panels, Olmedilla produces an enormous amount of electricity on every sunny day (worldofphotovoltaics.com).
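As a quick plausibility check, the panel count multiplied by a per-panel rating should come out close to the park's nameplate capacity. The sketch below is illustrative only: the 370 Wp panel rating and the 1.5 kW average household demand are assumptions, not published Olmedilla specifications.

```python
# Back-of-the-envelope check of the Olmedilla figures quoted above.
# panel_watts and home_kw are illustrative assumptions, not plant data.

panel_count = 162_000
panel_watts = 370   # assumed Wp rating per flat panel
home_kw = 1.5       # assumed average household demand (kW)

capacity_mw = panel_count * panel_watts / 1e6
homes_powered = capacity_mw * 1000 / home_kw

print(f"Capacity: {capacity_mw:.0f} MW")        # ~60 MW
print(f"Homes powered: {homes_powered:,.0f}")   # ~40,000
```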

SPECIFICATIONS
The most commonly used photovoltaic cells are mono-crystalline silicon cells, typically available as 83 W or 180 W panels. ZED is one producer of solar panels; the specifications of its 180 W panel are:
Maximum power: 180 W (180 Wp)
Dimensions: 1581 x 809 x 50 mm
Number of cells (pcs): 72
Maximum power voltage (V): 36.31
Maximum power current (A): 4.98
Open circuit voltage (V): 44.97
Short circuit current (A): 5.23
Maximum system voltage (V): 1000
Temperature range: -40 °C to 80 °C
Wattage tolerance: ±5%
Weight per piece (kg): 16.3
Length of cables (mm): 900
Cell efficiency: 15.2%
Module efficiency: 15%
Output tolerance: ±5%
Frame (materials, corners, etc.): aluminum
Test conditions: AM 1.5, 100 mW/cm², 25 °C (BedZED)

INSOLATION
The amount of solar energy received on a given surface area in a given time is known as insolation. It is normally expressed as average irradiance in watts per square meter (W/m²) or as kilowatt-hours per square meter per day (kWh/(m²·day)), equivalently peak-sun hours per day. For photovoltaic installations, insolation is often quoted as kWh/(kWp·y) (kilowatt-hours per year per kilowatt of peak rating). Insolation on a panel is largest when the sun directly faces its surface. To increase the insolation received by a solar panel, the panel can be placed at a specific tilt angle based on the latitude of its location. Insolation can also be increased by solar tracking, which captures the maximum sunlight and thereby increases the total power output (Deciding the Direction and Angle of Installation, 2008).
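To make these units concrete, the short sketch below converts a site's daily insolation into an estimated module output. The insolation figure, the derating factor, and the reuse of the 180 W panel rating from the specification above are all illustrative assumptions.

```python
# Illustrative estimate of daily module output from insolation.
# All input values are assumptions for demonstration purposes.

module_peak_w = 180.0        # Wp rating (from the ZED 180 W panel above)
insolation_kwh_m2_day = 5.5  # assumed site insolation; at the 1 kW/m2
                             # reference irradiance this equals peak-sun hours
system_derate = 0.80         # assumed losses: temperature, wiring, inverter

daily_wh = module_peak_w * insolation_kwh_m2_day * system_derate
print(f"Estimated output: {daily_wh:.0f} Wh/day")  # ~792 Wh/day
```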

SOLAR CELL TECHNOLOGY
The most common material used in solar cells is single-crystal silicon; however, the efficiency of single-crystal silicon solar cells is limited to about 25%. Due to this limited efficiency, research is being done to find more efficient materials. The latest concept is the quantum dot solar cell.

Quantum Dot (QD) Solar Cell
The conversion efficiency of photovoltaic cells can be increased by using two or more p-n solar cell junctions. To this end a new concept, the quantum dot (QD) solar cell, is considered below. The proposed model is based on a p-i-n cell structure. Higher internal quantum efficiency for the collection of photoexcited carriers occurs as a result of channeling the electrons and holes through the coupling between dots. This effect allows the electrons and holes generated in the QDs to be separated and injected into the adjacent p- and n-regions with high efficiency.

SCHEMATIC DIAGRAM OF A PHOTOVOLTAIC CELL
A photovoltaic cell works by absorbing the photons carried by incident solar radiation. The impact of the photons excites the silicon electrons of the solar panel, moving them into a higher energy state. Once they fall back to their original energy state, they release the corresponding energy in the form of DC (direct current). An inverter connected to the panel converts the DC into AC, since household appliances and most electrical equipment run on AC (alternating current). This AC current flows through a PV generation meter and on to the load, providing normal electricity for household usage.

This PV array design can be employed on a large scale, as at Olmedilla Photovoltaic Park, to produce megawatts of AC electricity, which can sustain small towns and even cities, fulfilling all their electrical needs.

Standard Test Conditions
The DC output of solar modules is rated by manufacturers under Standard Test Conditions (STC). The STC conditions are:
Solar cell temperature: 25 °C
Solar irradiance (intensity): 1000 W/m²
Solar spectrum: as filtered by passing through 1.5 thicknesses of atmosphere (AM1.5)

Overall System Efficiency
A solar cell's efficiency is the percentage of power converted (from absorbed light to electrical energy) and collected. It is calculated as the ratio of the maximum power point, Pm, to the product of the input irradiance (E, in W/m², under standard test conditions) and the surface area of the solar cell (Ac, in m²):

Efficiency η = Pm / (E × Ac)


According to the STC specification:
Temperature (T): 25 °C      Irradiance (E): 1000 W/m²      Air mass: 1.5 (AM1.5)

Calculation
For example, take T = 25 °C, E = 1000 W/m², surface area Ac = 0.05 m², and efficiency η = 12%.
Maximum power produced = (0.12)(1000 W/m²)(0.05 m²) = 6 W (according to the formula mentioned above)
Energy produced per day, assuming for illustration five peak-sun hours of STC irradiance per day = 6 W × 5 h = 0.03 kWh/day
Energy produced over a year = 0.03 kWh/day × 365 days ≈ 11 kWh/year
from a single cell of the above-mentioned efficiency. Using the formula above, these calculations can be performed with a simple calculator.
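The same worked example can be expressed as a short script; note again that the five peak-sun hours per day is an illustrative assumption made above, not a property of the cell.

```python
# STC power and energy estimate for the example cell above.

def cell_power_stc(efficiency, irradiance_w_m2, area_m2):
    """Maximum power in watts: P = eta * E * Ac."""
    return efficiency * irradiance_w_m2 * area_m2

p_max = cell_power_stc(0.12, 1000.0, 0.05)   # 6.0 W
peak_sun_hours = 5.0                         # assumed, for illustration
daily_kwh = p_max * peak_sun_hours / 1000.0  # 0.030 kWh/day
yearly_kwh = daily_kwh * 365                 # ~11 kWh/year

print(f"P_max = {p_max:.1f} W, daily = {daily_kwh:.3f} kWh, "
      f"yearly = {yearly_kwh:.1f} kWh")
```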

The power output of a photovoltaic cell depends on the incident intensity and the ambient temperature, and variations in these conditions result in different power output levels. Similarly, for photovoltaic cells of different efficiencies, the respective power outputs can be calculated, and a particular photovoltaic cell is selected based on the power output required by the application.

Peak-power Output
A photovoltaic cell operates over a range of currents and voltages. To determine the peak power output, we increase the resistive load on an irradiated cell continuously from zero (a short circuit) to a very high value (an open circuit).

A high-quality mono-crystalline silicon solar cell at a temperature of 25 °C produces 0.60 volts open-circuit (VOC). At 25 °C air temperature, the cell temperature will be around 45 °C, which reduces the open-circuit voltage to about 0.55 volts per cell. Maximum power (at 45 °C cell temperature) is achieved at 75% to 80% of the open-circuit voltage (0.43 volts in this case) and 90% of the short-circuit current. Lower-quality cells drop voltage more rapidly with increasing current and may produce only ½ VOC at ½ ISC; the usable power output then falls from 70% of the VOC × ISC product to 50% or even as little as 25%. The maximum power point of a photovoltaic cell also varies with the incident illumination.
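A minimal sketch of the maximum-power estimate described above, using the quoted fractions of VOC and ISC; the short-circuit current is borrowed from the 180 W panel specification purely for illustration.

```python
# Rough maximum-power-point estimate for a high-quality mono-crystalline
# cell at 45 C cell temperature, following the fractions quoted above.

v_oc = 0.55    # open-circuit voltage (V) at 45 C cell temperature
i_sc = 5.23    # short-circuit current (A), from the panel spec (assumed)

v_mp = 0.78 * v_oc   # maximum power at ~75-80% of V_OC (~0.43 V here)
i_mp = 0.90 * i_sc   # and at ~90% of I_SC
p_mp = v_mp * i_mp

fill_factor = p_mp / (v_oc * i_sc)   # ~0.70 for a good cell
print(f"V_mp = {v_mp:.2f} V, P_mp = {p_mp:.2f} W, FF = {fill_factor:.2f}")
```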

Watts Peak
Photovoltaic cell output power depends on various factors, including the incidence angle of the sun, so to compare different cells and panels we use watts peak (Wp). This refers to the output power under standard test conditions: an irradiance of 1 kW/m², a solar reference spectrum of AM (air mass) 1.5, and a cell temperature of 25 °C.

Technology Clusters

A technology cluster is a concentration of companies that are closely interconnected in terms of providing services and products in a close range. Such clusters are developed to improve efficiency and are more cost-effective than single units. The cluster model maximizes the utilization of monetary resources, e.g. fewer resources are spent on coordination, communication, and travel for everybody (George, 2008). Clusters also help in the efficient use of human capital. The impact of technology clusters is therefore not restricted to the economic arena but extends to significant social, institutional, and cultural effects as well.

Technology clusters can affect their surroundings in various ways. For example, many technology clusters have advanced the economies of their regions, leading to changes in social classes, for example the development of leisure classes within the society. Another socioeconomic impact of technology clusters is the production of by-products and pollution, which can affect the social life of the surroundings as well as exhaust the available natural assets of a region to the peril of the environment.

Another way in which technology clusters may impact culture is through the development of subcultures. This occurs when the culture within a technology cluster is adopted by the surrounding society as its own, so that the society comes to reflect the culture of the cluster. The impact may enhance the culture of the society in a positive or negative way, depending on the nature and values of the technology centre. The underlying aim of any technology cluster is basically to create organizational functionality through a well-coordinated approach to its operations.

Strategic Management of Information and Technology


Intel is arguably the market leader in microprocessor technology. However, this position has been the result of numerous changes in its strategic platform and of awareness of changes in market preferences and technology. Leadership has also played a role in the adoption of Banias and the Centrino, which propelled Intel onto a new strategic path. This paper will analyse some of the factors that have played a role in the success attained by Intel.

Role of IDC as a Geographically Peripheral Centre
The Israel Development Centre (IDC) is home to some of the best computer specialists and software designers from Israeli universities. It is noteworthy that though the IDC at Haifa played a central role in developing the Centrino and revolutionising microprocessor technology, the role played by Intel's headquarters at Santa Clara cannot be downplayed. The IDC, unlike the Santa Clara headquarters, is shown to be mainly involved in technological development, whereas the headquarters ensured that business sense was incorporated into the developments made by Intel. It is also apparent from the number of failed or neglected projects that IDC had developed before the Banias that the centre was highly innovative and aware of trends in technology development. It was the IDC that anticipated the problem that heat generation would cause as processors grew faster. This led to a somewhat different approach to microprocessor development, in which the company paid more attention to the overall performance of microprocessors rather than to boosting clock speeds. This reflects a willingness to take a step back, analyse the implications of changes in technology, and take a path that others had not thought of. Risk taking and awareness of technology developments thus emerge as attributes central to the peripheral role played by IDC in the success recorded by Intel.

While other Intel development centres in Texas and California dropped their projects and started working on the Pentium 4, the IDC kept working on Pentium III derivations. This reflects a willingness to improve existing technology rather than simply taking a new approach. The work ethic and uncertainty associated with working at IDC increased the levels of commitment and input of its scientists. The cancellation of the Chopaka and Timna projects just before going to production served to fuel a sense of insecurity that was channelled into creativity. The negative experience of these cancellations, and the experience the IDC scientists gained from them, played an important role in positioning IDC as one of Intel's important research centres.

To fully appreciate the importance of the role played by IDC, one has to consider what innovation requires. The IDC displayed resilience: despite working conditions that did not convey a sense of security, and the cancellation of two major projects just before production, the scientists took these failures in stride and made the best of their experiences. The relevance of innovation to developments in technology is another aspect displayed by IDC, in line with the principle that any development must be relevant to organisational needs if it is to count as innovation.

External and Internal Forces Leading to the Bania Project
There are both external and internal forces that led to the development of the Banias project. Analysis of innovation shows that it is driven by internal factors, external factors, and in some cases both. The development of the Banias project was a result of both. A critical review of the conditions surrounding the project reveals increased concern and awareness at IDC about the heat generated by high-performing microprocessors. As machines became smaller and customer demand for fast microprocessors put increasing pressure on chip manufacturers, the heat generated by microprocessors grew ever more likely to become problematic.

Another external factor that may have played a role is the increase in awareness of the need to develop energy-efficient machines. The globe has seen a rise in awareness of the effects of global warming and of the need for energy efficiency. The Banias chip design is clearly a measure aimed at improving both the performance of the chips and their energy efficiency, which makes it relevant to the market's growing attention to energy conservation. Furthermore, speed was losing its effectiveness as a differentiator, so there was a need to develop other qualities that would ensure Intel maintained its competitive edge in the microprocessor industry.

A host of internal factors also played a role in the formulation and development of the Banias project. It is noteworthy that the Banias project is an extension of some of the projects and developments that did not make it to the production line. Existing knowledge of how microprocessors can attain high levels of performance while minimising power consumption was an internal factor that triggered the Banias project. Furthermore, the failure of previous projects to reach production had a profound effect on the IDC, given its knowledge of and interest in the heat-generating effects of other fast processors. To put it plainly, the IDC had knowledge of a problem that had yet to be fully conceptualised by others, workers who understood the problem and had a better grasp of how it could be addressed, and the support of their management. These factors played an important role in ensuring that the Banias project was formulated and implemented effectively. The background the IDC team had in working on Pentium III processors, and the failed projects that were likewise aimed at improving microprocessor performance, also increased the likelihood of the IDC team adopting a project in line with their expertise.

Role of Different Levels of Management
Leadership plays an important role in providing a platform for innovation; in fact, leadership is considered a key requirement in coordinating any innovative development. A critical review of the development of the Centrino reveals that leadership at different levels of Intel management played a central role in bringing the idea to fruition. Leadership shaped the operations of Intel and IDC through leaders like Otellini, Maloney, Perlmutter, Barrett, and MacDonald.

The operational managers played a role in coordinating the innovation activities. In particular, Mooly Eden led the efforts to develop the chipset codenamed Odem despite initial opposition by the California chipset group. Another contribution of Eden's was his role as a platform manager, in which he was required to coordinate across the Centrino platform to ensure that all components would be delivered. Eden is highlighted as having good people skills, for he could confront employees without offending them, and his experience as an engineer placed him in a strong position to handle his responsibilities as platform manager. Guchman, on the other hand, believed he could develop a more energy-efficient and powerful processor than the Pentium 4, a conviction that eventually gave rise to the mobile segment.

A senior manager who played a role in the development of the Centrino is Chandrasekher. Chandrasekher headed the centralised planning and marketing group, which was charged with a host of activities that ultimately contributed to the development of the Centrino. In general, Chandrasekher's group was to centralise marketing, provide an outside perspective, and help determine what the market really cared about. As head of the general managers and chief coordinator of market research activities, Chandrasekher played a central role in ensuring that the Centrino developed into a product that would meet the expectations and needs of the market. In general, the senior managers shaped the development of the Centrino by addressing the various technological and marketing issues it would face as a brand, coordinating meetings, and facilitating the sharing of ideas between the general managers.

The executive managers also played a role in the development of the Centrino. Barrett, as an example, is credited with providing clarity and direction to both the Johnson and mobile teams concerning the importance of the wireless feature, which has played a major role in differentiating the Centrino as a brand. Such definitive leadership is important in ensuring that divisions prioritise their activities and deliver products or goals that matter to the overall development of a project. Moreover, Barrett and Otellini as executive managers showed great belief in the Banias project when they approved a $300 million budget to launch the Centrino. The level of belief that leaders bestow on their employees goes a long way in affecting their motivation, which is critical to innovation and to meeting deadlines for product development. Leadership thus played a central role in the success of the Banias project and the resulting Centrino brand.

How Banias Changed Intel's Core Microprocessor Roadmap
Strategic roadmaps are often developed in response to market needs. In general, the view adopted by a firm, though affected by the actual expectations and needs of the market, is shaped by the variables the firm considers critical in predicting market trends. It is therefore not surprising that the strategic plan before the adoption of the Banias was aimed at developing high-performing microprocessors with little emphasis on energy efficiency. That strategy was basically aimed at addressing the speed needs of the market: Intel's strategies were developed with the market in mind, yet its initial roadmaps failed to incorporate energy efficiency. More emphasis was placed on the development of fast processors, with little effort channelled into addressing their rates of energy consumption.

A critical review of the initial strategic roadmap reveals an emphasis on market needs and little consideration of the environment and energy. Energy conservation is an issue that is fast gaining relevance in modern society, and the failure of the previous strategies to handle it effectively would have been detrimental to Intel. Banias changed Intel's microprocessor roadmap by integrating an aspect of energy efficiency that was lacking in the roadmaps Intel had previously adopted. Notably, the initial strategies of improving microprocessor speed are still being pursued, but an element of energy efficiency has now been incorporated. The new strategic roadmap ensures the development of products that are relevant not only to the speed requirements of the market but also to energy requirements, which are fast gaining relevance in different markets.

Another effect of the Banias on Intel's strategic roadmap is the integration of developments in mobile and desktop technology. The Banias and the resulting Centrinos aided the adoption of an integrated approach to innovation, which improves the efficiency of both mobile and desktop processors while deepening the penetration of Centrino as a brand name. The energy-efficient processors can be used in both mobile and desktop technologies, which brings out the integration aspect.

Implications for Pursuing a Platform Strategy
There are various requirements that Intel will have to incorporate into its operations to fully support the adopted platform strategy. First, it is important to note that the platform strategy was a move aimed at dealing with multiple practical issues that hindered Intel's success as an organisation by affecting the level and nature of interaction between employees. Adopting a platform strategy was clearly aimed at improving internal systems by differentiating Intel's operations into three groups. The platform approach is defined by highly specialised roles and the development of groups or teams with clearly defined responsibilities, for instance the centralised planning and marketing group. Such specialisation is important in improving the efficiency of the different groups and of Intel as a whole. However, to ensure that the platform strategy is well supported, Intel must incorporate a range of measures aimed at improving its communication systems. Furthermore, interaction between the different platforms has to be strengthened to improve the level of sharing between groups. Though the groups are different, they are all aimed at developing Intel and Centrino as brand names. For the different groups to maintain focus, Intel must communicate a strong vision and develop strong leadership. The existence of multiple divisions requires strong leadership to ensure that each division is aware and appreciative of Intel's goals and strategies.

Conclusion
The development of the Centrino into a world-renowned microprocessor brand reveals a number of positive strategies and systems developed and implemented by Intel. The innovative spirit and commitment displayed by the IDC, the good leadership and willingness to take risks displayed by leaders at different levels of Intel, and an appreciation of the need to be in touch with the needs and expectations of the market all emerge as variables contributing to the success recorded by Intel. It is evident from analysing the Centrino's development that innovation within organisations requires consideration of the needs and expectations of the market, awareness of the external environment, and a host of internal measures if it is to be carried out effectively.

Emmett Chappelle's Biography

Emmett Chappelle was born on October 25, 1925, in Phoenix, Arizona. He conducted his undergraduate studies at the University of California, Berkeley, where he graduated in 1950 with a degree in biochemistry. His academic journey did not end there, for he earned an M.S. in biochemistry from the University of Washington in 1954, after his four years at Meharry Medical College in Nashville, Tennessee, where he served as a biochemistry instructor. For four years (1955-1958) Chappelle worked at Stanford University as a research associate. His interest in biochemistry grew even further after he joined the Institute of Advanced Studies, where he served for five years, from 1958 to 1963. None of his years seemed to pass without involvement in research, for from 1963 to 1966 he was a biochemist at Hazelton Laboratories before taking the position of exobiologist and later that of astrochemist.

Chappelle's entry into NASA came in 1966, at the NASA Goddard Space Flight Center, where he lent a hand in the manned space flight project. Chappelle is well known for his pioneering work in food and water biochemistry following his discovery of ingredients that are present in all cellular materials. This was the basis for his development of the techniques currently used in the analysis of bacteria in foods, water, urine, cerebrospinal fluid, and blood. Never allowing technology to leave him behind, Chappelle switched his research to laser technology, developing laser-induced fluorescence for remote sensing of vegetation health in 1977 at the Beltsville Agricultural Research Center.

Chappelle retired from NASA in 2001, but he remains a member of various chemistry societies such as the American Chemical Society and the American Society of Biochemistry and Molecular Biology, among several microbiology societies.

Searching for Reliable Information on the Web

In the new Information Age, information seekers rely on search engines like Bing, Yahoo, and Google, which provide much of the information we need along with much that is unwanted. Many sites are fake and provide information that may look true but is not. For these reasons, searching for online information and materials has become a difficult task for every web searcher, and the search has become critical, as it may or may not return reliable and accurate information. This affects society badly, as many people simply search for material and never consider whether it is correct or not. Many a time the information provided on the web is misleading and unbelievable, which can be assessed by viewing the instances below.
                           
Google is a worldwide search engine that provides all types of search results to the common man as well as to the well-qualified person; it has made search convenient and easy and provides immense knowledge. There are several Google technology products, like YouTube, Orkut, Google Chrome, Gmail, and Gtalk, which have made life easier. Most of the time they are very useful, but at the same time many sites and much of the content provided are misleading and false, which many searchers cannot detect until they have been cheated by information that is baseless. Getting the correct and required content has become a difficult task. We can see several instances wherein a website is fake and the content absolutely irrelevant and baseless. Let us assess the instances below.
                           
On the Google Technology website we find a lot of information about the ease and speed with which Google conducts its operations with pigeons: Google supposedly manages to find results for every query quickly by using PigeonRank, a Google search technology and system for ranking web pages. Google founders Larry Page and Sergey Brin reasoned that pigeon clusters could compute the relative value of web pages faster than human editors. Engineers work to improve every aspect of the daily services, and PigeonRank provides the basis for all the web search tools. PigeonRank recognizes objects regardless of spatial orientation, as the pigeons are superbly trained to distinguish items with minute differences, an ability that enables them to select relevant websites from thousands of similar pages. The ease of training pigeons is documented by a psychologist who demonstrated that pigeons could be trained to execute complex tasks. Ultimately, with this information on the web, and on a Google site at that, an unknowing person would be misled: he thinks it is correct, starts doing research, and in the end is fooled and cheated. This is a striking example of searching the web for reliable material and finding fake material instead.
                           
When a search is made for "Boilerplate, Mechanical Marvel of the 19th Century" we get a lot of information saying that in the 19th century Boilerplate, a mechanical man and the first robot soldier, was developed by Prof. Archibald Campion in a Chicago lab, one of history's great ironies, a technological milestone that remains largely unknown. Boilerplate was a prototype soldier able to execute its proposed functions by participating in several combat actions. Boilerplate was a prototype soldier for resolving the conflicts of nations, participating in most of the conflicts of its age either as an observer or as a combatant, and disappearing under mysterious circumstances in 1918. This mechanical man was supposedly an attempt to replace human soldiers in an age when men were obsessed with proving themselves in battle. There are even newspaper cuttings, public opinions, comic editions, media reports, videos, and so on added to the site, which has a lot of footage.

The site provides the information, but is it true? Was there really a mechanical man created? It remains a question, and with the material provided on the web no one can come to a conclusion. It could be a character created by someone, or some illusion. This type of information misleads society.
                                 
Consider the historic killer tornado, nature's most violent storm, a violently rotating column of air extending from a thunderstorm to the ground, which reportedly struck Kansas on 10 June 1938. At that time information systems and technology were not so advanced, so the media had to rely on the public and on the accounts of people present to gather information and process it for identifying and preserving the historical record. The scientific labs were not equipped to obtain instant and exact information, so the scientific community knew about tornadoes but this potentially rich source of historical information was not available. What exactly happened no one knew, but some documentaries made about it became a subject for careful analysis by Tom Grazulis. His efforts to create a database of historical tornadoes drew distinctions between the causes of death in the death reports. Nothing was live at that time; there were no images, no footage of the disastrous incident; analysts could only reason from the surroundings and from the amount and type of damage created by the twister. Movie footage of tornadoes before 1950 is rare, as the technology and the awareness were limited, and the few information providers deserve the gratitude of scientists, as their material enabled further research and several conclusions. All this is given on the website, but to be frank we do not know whether it is true or not. It is very difficult to come to a conclusion regarding such incidents from web information, which can misguide.
                         
In the present scenario we have several business organizations and solution providers who claim to solve the problems faced by a company or organization. For instance, there is Corp Soft Corporate Solutions, a web source which says it provides the required solutions for the problems faced by an enterprise or organization. It is a website that claims to provide all sorts of key solutions for multiple issues in the organization, such as: 1. the money flow of the organization, with financial planning packages; 2. focus on the customers and employees, especially company executives' time being channeled into non-productive activities, which should be dealt with by providing goal-oriented corporate solutions; 3. time management and awareness being instilled in the employees so that goals are achieved without wasting valuable time. All this is provided on the website, which may not be real, since it gives no address or contact information: it says "contact us" but no address or phone number is provided. The searcher is misguided, and it takes time to understand that no such site or organization exists. An unknowing person may think it is true, start searching, and waste his valuable time and energy; later he comes to know the content is a fraud. This proves that at times web content is wrong, that technology is misused and can misguide a person, and that one should not rely blindly on web information.
                     
There is another instance of a website named GolfCross, a game played with an oval golf ball, which says the ball is not round but oval, that the game has goals rather than holes, and that it is played extensively in New Zealand. The website is beautifully created, with a UK site, a European site, a Danish site, a French site, a Hungarian site, and an Argentinean page; clicking on these leads to further pages of people's comments and opinions, whose existence only the site creator knows. The site also has all the required ingredients, like History, Terms, Rules, FAQ, Tips, Media, News, Contact Us, etc., so that any unknowing person would think it real. It has many reviews, public opinions, media reports, and news updates, all imagined by the website creator. The site gives the history of the game from 1986, which is present nowhere except on this site, which strains belief. There are several labels and link buttons that lead the user to further pages with details like the game, the ball, the rules and regulations, sponsors, owners, trade customers, and, most important, champions who never existed. But is it true? No, it is a fake website created to fool the public and waste their time. When dealing with such a website a person must be very careful in considering the details, studying the material presented, and analyzing it before coming to any conclusion.
                   
By assessing and analyzing all the above instances, we can come to some conclusions regarding the information available on the web. The material and content available is not always correct. Before taking it into consideration, one should assess factors such as: the address provided; whether the site content is believable; the mail IDs provided; and whether there is any note at the end, as with the Google technology webpage marked "Published on April Fools' Day" or the Corpsoft Solutions page signed by the "Bullshit Team". So while going through a World Wide Web page or site, the person needs to be attentive and devoted to the work he or she is doing so as not to be cheated. The web provides excellent information, but its accuracy and worth must be assessed by the person browsing. There are many advantages and disadvantages at once, and it is up to the searcher to decide whether the information is correct. So while doing a search, anyone should consider all these factors before coming to conclusions.

The use of New Technologies

Often it is emphasized that e-learning is advantageous because it is not dependent on time and location. However, the minimum requirement of traditional e-learning is a personal computer, so it does not provide complete location independence. Notebooks still cannot deliver this independence either, because real location independence depends on affordability as well as on the rapid advancement of the necessary technology. To solve this problem, one can turn to a new technology: highly mobile devices that are already widely available, such as mobile phones. The Austrian market, for instance, is saturated with mobile phones; most university and high school students have a mobile phone in their hands at all times. The research question is: will m-learning (mobile learning) be a vital instrument for learning assistance in the future?

This research will use qualitative data, such as observation data, interviews, and documents, to clarify and understand the social aspects associated with mobile learning. In information technology, research has shifted from technological concerns to organizational and managerial issues, so there is escalating attention to qualitative methods. The probable result of this study is that mobile phones are able to sustain organizational change through a mobile learning engine.


Learning is a process of social or mental change over an entire lifetime. Nowadays the structure of learning is constantly changing, particularly in universities and high schools. In this milieu, new techniques are offering students opportunities to interact and converse with simulated environments as well as multimedia learning resources.

Accordingly, as Billet (265) asserts, technology can enhance motivation, which is an important feature of learning, since it is capable of delivering needed information, satisfying curiosity, and encouraging problem solving. Above all, it raises the possibility of scaffolding learners through an extended process that captures and organizes situated activities.

Research question: The research question in this study is: will m-learning (mobile learning) be a vital instrument for learning assistance in the future?

Literature review
According to the study by Kurkela (50), the usage of computers in learning focuses on enhancing learning in formal settings, specifically a computer lab, or on traditional learning. Moreover, Kurkela (53) argues that learning does not occur only within such formal milieus. Learning capabilities can thus be expanded by using mobile devices to solve problems that are not tied to one location. As Shank (29) puts it, mobile learning is a combination of mobile computing and e-learning, capable of accessing applications that support learning at any place.

A study by Shank (37) illustrated that in a variety of past examples, laptops or handheld computers were used mainly to support adults in the office. Corporate mobile devices were used in mobile learning because they were attractive. Yet although the hardware has been deemed usable, capable of solving problems, affordable, and inventive, software remains the greatest challenge. Soloway (287) found that in a project-based learning milieu, handhelds ought to be supported as a fundamental part of the learning activity, above all through probable feedback and ongoing assessments.

Nonetheless, studies by Billet (269) revealed that small laptops or handhelds lacked availability and were expensive, specifically among pupils. The merit of mobile phones is that they are highly available. The penetration of mobile phones in the Austrian market is at a level of 80 percent, and these numbers are still escalating. Moreover, Billet (271) emphasizes that the population in general, and the young generation in particular, has been availed of mobile phones and usually has them at hand at all times. Finally, a study by Papert (68) indicated that mobile learning is a vital instrument for lifelong learning, which, for instance, is a central objective of the European Union.

Methodology
This research used qualitative methods to move from causal theoretical assumptions to data collection and research design, and to address the research practices, various skills, and postulations that attend the usage of new technologies. The following methods were encompassed: action research, grounded theory, case study research, and ethnography. Action research was used so as to contribute both to the practical concerns of individuals in an immediate problematic circumstance and to the objectives of social science, through joint association within a mutually acceptable ethical framework.

A case study was used to describe students in Austria who are being offered opportunities by new techniques to interact and converse with simulated environments. This method examines a contemporary phenomenon within a real-life setting where the boundaries between phenomenon and milieu are not evident; it can be deemed critical, positivist, or interpretive depending on the fundamental theoretical assumptions of the researcher. Ethnography was used as well: the ethnographers immersed themselves in individuals' lives in order to study management aspects linked to new information technology, and this method allowed multiple perceptions to be integrated into the design of the system. Grounded theory was used because it develops process-oriented explanations and context-based descriptions of the phenomena.

Data Collection
Each of the methodologies has its own technique for empirical data collection. The methods range from observational techniques (such as fieldwork and observation) and interviews through to archival study. Written sources of data can encompass newspaper articles, documents, reports, and the internet. The case researcher in this study used documentary materials and interviews, with no participant observation. The ethnography researcher spent a significant amount of his or her time in universities and secondary schools in Austria, and the data was collected from the fieldwork notes.

Findings and Analysis
The display size used in mobile phones is relatively small, and their bandwidth and processing power are also limited. In testing mobile phones as a new technology suited to learning, application development faced the following restrictions: a high diversity of operating systems; limited memory resources and processing power; a wide range of input possibilities; and a variety of screen sizes. From this analysis, we cannot assume that every standard mobile phone suits an m-learning application.

Figure 1 illustrates that the mobile phones now available are smartphones, which combine mobile phones and PDAs. To build an application that is platform independent and can run on various operating systems, it is necessary to develop in a standardized environment such as J2ME (Java 2 Micro Edition). Current phones are Java-enabled, that is, they can execute a Java 2 Micro Edition application. From the findings, one can say that Java 2 Micro Edition makes it possible to create web-based applications because it is platform independent. Multimedia-based applications, however, require additional libraries, which differ from manufacturer to manufacturer; the fundamental rule is: the newer the smartphone, the more J2ME libraries are supported.
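
As a rough illustration of this platform independence, the following is a minimal sketch of a J2ME application skeleton; the class name LearningMidlet and the displayed text are invented for this example and are not taken from the study itself.

    import javax.microedition.lcdui.Display;
    import javax.microedition.lcdui.Form;
    import javax.microedition.midlet.MIDlet;

    // Minimal MIDP application skeleton. Because it uses only the standard
    // MIDlet life-cycle and LCDUI classes, the same compiled application
    // can run on any Java-enabled handset, whatever the manufacturer or OS.
    public class LearningMidlet extends MIDlet {
        protected void startApp() {
            Form form = new Form("Mobile Learning");          // a simple screen
            form.append("Welcome to the learning module.");   // static content
            Display.getDisplay(this).setCurrent(form);        // show the screen
        }
        protected void pauseApp() { }                         // nothing to release
        protected void destroyApp(boolean unconditional) { }  // no cleanup needed
    }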

Figure 2 below illustrates screenshots from the Mobile Learning Engine, a multimedia-based mobile phone application. The feedback and first experiences of the end-users interviewed were good: most were capable of handling the Mobile Learning Engine spontaneously, without instructions, and the majority were particularly impressed by the interactive queries and the multimedia learning objects. The Mobile Learning Engine was developed using J2ME, so it is capable of running on a number of platforms.

In view of the fact that it is platform independent, it can handle numerous screen resolutions; various input possibilities (such as mouse, keypads, or keyboards); and numerous operating systems, like Palm OS, Symbian OS, and Microsoft Pocket PC. Features of the Mobile Learning Engine realized so far encompass formatted continuous text; video and audio playback bars; intelligent assistance; and interactive questions, including graphical ordering questions, character-insertion questions, and graphical marking questions within a picture.
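
As a hedged sketch of what one such interactive question might look like on the J2ME side, the following builds a multiple-choice form using the standard LCDUI widgets; the class name, question text, and answers are invented and are not drawn from the Mobile Learning Engine's actual code.

    import javax.microedition.lcdui.Choice;
    import javax.microedition.lcdui.ChoiceGroup;
    import javax.microedition.lcdui.Form;

    // Sketch of an interactive multiple-choice question built with the
    // standard LCDUI widgets available to any J2ME application.
    public class QuestionFactory {
        public static Form buildQuestion() {
            Form form = new Form("Question 1");
            ChoiceGroup answers = new ChoiceGroup(
                "Which layer of the network routes packets?", Choice.EXCLUSIVE);
            answers.append("Application layer", null);  // distractor
            answers.append("Network layer", null);      // correct answer
            answers.append("Physical layer", null);     // distractor
            form.append(answers);
            return form;
        }
    }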

Discussion
The shift from pure instructor-centered classroom teaching to a constructivist, learner-centered didactic milieu away from the classroom can be enhanced by the mobile technology now available. For instance, mobile technology can support learning in outside settings: physics in the real world or in the laboratory, biology in the field, history in the museum. Proper usage of such mobile devices thus offers great potential for application in a constructivist learning milieu. Educational and didactical approaches for attaining such a milieu encompass situated learning, explorative learning, and scaffolding; the fundamental approach, however, is problem solving.

Conclusion
The usage of mobile phones is a probable way of supporting organizational change in learning at universities and high schools, through the introduction of the Mobile Learning Engine. The support necessary for changes in organizational learning can be produced by new technologies such as mobile phones, and the pedagogical merits appear adequately convincing. The phenomenal expansion of mobile computing will nevertheless require future research, specifically in HCI (Human-Computer Interaction) and media psychology, on interfaces that are responsive or adaptive as well as on adapted content.

Ethical and Unethical Hacking

Almost all aspects of life today (education, communication, security, trade) depend on computer technology. However, there is always fear in adopting this technology, as institutions and individuals worry that their private information can be accessed by hackers. The big question, therefore, is whether hacking is ethical or unethical. How can individuals and organizations be sure that their information on the internet is safe? Hacking may be either ethical or unethical; the difference lies in the intention of the hacker. An unethical hacker aims at developing a program that steals information or destroys other systems, while an ethical hacker aims at sealing the loopholes in existing systems to prevent unethical hacking.

Introduction
With the explosive growth in information technology, many changes in society have resulted in the world's increased dependence on computers and the internet. Today, people trade on the internet, students obtain a wide range of academic materials from the worldwide web, individuals collaborate on a mission while thousands of miles apart, and manufacturers and organizations make themselves known through the internet. Although the invention of computers has been the greatest development in modern society, like any other development it has a dark side. Hacking and cracking have been the greatest problems facing the modern use of computers. Everyone, including government agencies, corporate organizations, and individuals, is nervous about becoming part of the global village, fearing that criminal hackers can get into their websites' servers and change their logos, access their email addresses or credit card numbers, or even plant software that reveals their secrets on the web. System hacking has remained an issue as communication and information technology continues to develop, and it is the most unethical issue touching on the use of the internet and computer technology in general. Since it is not possible to eradicate the vice, promoting ethical hacking to identify and correct security flaws in systems is the best approach. To any positive thinker, ethical hacking is justified, since it identifies the loopholes in a system's security and protects the system from criminal hackers.

Ethical and Unethical Hacking
The basic definition of a hacker is an individual who enjoys expanding his or her knowledge of computer systems. A computer hacker differs from a normal user in endeavoring to increase his abilities rather than merely learning basic computer skills; he is more concerned with programming itself than with learning programs theoretically. Hacking therefore describes the rapidly growing art of developing new programs or modifying existing programs using more complex software. As computer technology improved and computers became available to everyone in most parts of the world, the focus of computer use in research and engineering changed as people discovered the flexibility of the tool. People wrote programs to draw pictures, play games, and carry out almost all the mundane tasks of modern life. As computers became more popular and essential, their cost skyrocketed, which created the need for access control. When some users were denied access to the computers, or to some information on them, they took this as a challenge and started developing ways of gaining access to the classified information or the computer. They could do this by stealing the passwords of other users, peering over their shoulders, or exploring the system for a loophole that let them evade the rules. However, it did not take long before this controlled intrusion by unauthorized individuals turned into talented individuals intentionally developing programs that could destroy files or crash systems. The intruders also devised ways of extracting secret information from computers.

As some computer hackers developed ways of accessing classified information from computers, system developers developed more secure programs. However, the more the programs advance, the more the hackers advance, and they would even take destructive action whenever they were discovered and their further access denied. The extent of the problem eventually became unbearable, making major news stories and prompting calls for action. Hacking was described as a growing unethical act in the use of technology, in which an individual broke into a computer or brought down a system for fun, for profit, or with an intention of revenge.

As technology advances, users worry about this unethical act, which puts their systems at risk. However, the problem associated with unethical hacking can be mitigated by employing ethical hacking practices. The threat posed by unethical hackers and intruders can be assessed by independent professionals who make attempts at hacking the systems, much as organizations use independent auditors to confirm that their bookkeeping practices are sound. In this case, the ethical hacker develops a program similar to an unethical intruder's program and targets the same system. While the unethical intruder will destroy the system or steal the information or both, the ethical hacker will evaluate the vulnerability of the system to intrusion and come up with a solution to that vulnerability. This independent, ethical hacking has been used for a very long time in analyzing system security; the method was used in analyzing systems in the United States Air Force, where the Multics operating system was considered better than other systems in conventional terms although it too was vulnerable to security sabotage.
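
To make the parallel concrete, the following is a minimal sketch of the kind of probe an ethical hacker might run, with the owner's written permission, to find exposed services before a criminal does; the target host and port list are placeholders invented for this example.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Simplified TCP "connect scan": an ethical hacker checks which common
    // service ports accept connections, so the client can close or harden
    // anything that should not be exposed. Host and ports are placeholders.
    public class ServiceProbe {
        public static void main(String[] args) {
            String target = "test.example.com";       // hypothetical target
            int[] ports = {21, 22, 25, 80, 443};      // common service ports
            for (int port : ports) {
                try (Socket socket = new Socket()) {
                    // A completed handshake within the timeout means the
                    // port is open and the service deserves a closer look.
                    socket.connect(new InetSocketAddress(target, port), 2000);
                    System.out.println("Port " + port + " is open");
                } catch (IOException e) {
                    System.out.println("Port " + port + " is closed or filtered");
                }
            }
        }
    }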

Before the invention of computer networking and the internet, unethical computer hacking was not a major ethical issue; information was considered vulnerable only in the military establishment. It has become a major issue in modern computer use because of unavoidable computer connectivity. In the early 1990s, hackers first revealed that their techniques could be used to gather any information, claiming they had systems that could compromise the security of information circulated through the internet. However, their programs were created not to damage systems or steal information but to increase the security of information on the internet; for this reason, these hackers claimed to have examples of how an attack could be carried out as well as how it could be prevented.

There are ethics that any ethical hacker must adhere to, or he or she will be considered a computer criminal. An ethical hacker should be trustworthy in all aspects of his work. In the process of evaluating a client's system for security threats, the hacker may come across information considered top secret. Ethics requires the hacker to treat that information as secret; otherwise it may reach the wrong people, who may be unethical hackers. Publicizing the secrets of the system would be problematic, since unethical hackers would then find easier ways of breaking in. The client's system is therefore in the hands of the hacker, which requires the ethical hacker to be trustworthy with the information obtained about the target systems. The information obtained in the evaluation, the programs used in the assessment, and the counterattack programs developed are all very sensitive and must be treated with the security they deserve; even slight dishonesty, in which the hacker mishandles the information willingly or unwillingly, may be very risky to the target system.

For an ethical hacker to effectively analyze a system and to propose and develop ways of protecting it from unethical hackers, he or she should be equipped with the relevant skills: computing and programming knowledge, system installation and maintenance, and a proper understanding of the security of these systems. These skills and knowledge are essential in evaluating the system for possible threats and in making an appropriate report to the client. Beyond skills and knowledge, ethical hackers need to take their time in their evaluations and be persistent. This is contrary to the hacker in an entertainment movie who breaks into a system in no time; real unethical hackers, too, are known to be very persistent and patient. Both ethical and unethical hackers may spend several weeks monitoring a system for a possible opportunity to attack. It is not easy to automate the process of evaluating a system for possible attack; it is a very tedious job that takes time, and the evaluation may require a hacker to work outside the normal working schedule while seeking the loopholes in the system. An ethical or unethical hacker will take even more time to investigate the weaknesses of an unfamiliar or new system. For this reason, a hacker attempts to keep up with the advancement of computers and security systems.

In my own view, ethical and unethical hackers have similar characteristics: both are equipped with knowledge and skills in computers and networking. The difference is their intentions, though the two coexist as each tries to outdo the other. An ethical hacker must be well aware of the knowledge and skills the unethical hacker has in order to evaluate the client's systems for possible attack; on the other hand, if unethical hackers fall behind the ethical hackers in knowledge, they are no longer significant. Computer hacking, however, is not a crime like any other, and on several occasions the ethical hacker effectively acts as an unethical one. The task of the ethical hacker in reducing the vulnerability of the system is even harder than fighting ordinary crime. The ethical hacker will therefore evaluate the target system to establish what a hacker can see in the system, what the hacker could do with the information obtained, and whether an intruder into the system would be easy to detect.

Conclusion
Hacking of systems started in the first days of computer use. Whether hacking the system is ethical or unethical depends on the intentions of the hacker. An ethical hacker will hack the system to evaluate the system for possible attack and make appropriate recommendations while an unethical hacker aims at either bringing down the system or stealing the information on the system. However, the two hackers use the same techniques.

Vannevar Bush

In this article, Bush argues that the ever-increasing developments in the world of technology have not been a scientist's war; scientists have in fact been forced to set aside traditional competition in order to respond to a highly dynamic world. He explains how different aspects of the scientific approach have changed in order to fit the new world order that those same scientists have largely shaped. Bush asks what benefits have resulted from the application of science, and he actually gives the reader the freedom to think both positively and negatively. He argues that research has brought about development in many aspects of scientific method, leading to increased benefits for mankind through an expanded knowledge base that can be used to solve emerging challenges.

Bush's conception of the Memex introduced the idea of an individually configurable and easily accessible storehouse of knowledge. During World War II, he worked on the profiles of radar antennae and the computation of artillery firing tables; this mathematics was not only repetitive but also complicated. It is for this reason that he proposed that an analogue computer be developed, which in turn became the Rockefeller Differential Analyzer. In this article he expresses his disappointment that the research he had carried out was eventually rendered outdated around 1950, when digital computers were invented. However, he was pleased by the new innovation, since he knew it had the ability to simplify several things that had been major stumbling blocks to great scientists for years. He therefore perceives technologies as a series of new inventions, each building on the old ones while rendering them obsolete at the same time.

The Benefits of Voice over Internet Protocol

Throughout most of the last century, business and public communications relied on the general public switched telephone network, but at the turn of the new millennium this changed. As the Internet became the primary medium of global communication, a need arose for an internet-based phone service that could be used with ease throughout the world. This solution is not only affordable and easy to use but also provides much better voice clarity than conventional PSTN-based systems.

Previous technologies were based on circuit switching, whereas VOIP is based on the revolutionary packet switching technique. Its easy installation and affordability have given a whole new meaning to the global outsourcing business, and it is found in many forms of overseas and IT business around the world. In packet switching, data is converted into packets and sent through the network, where routers forward each packet toward its destination at high speed, depending on the connection speed.
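
As a toy sketch of the idea (not VOIP's actual RTP machinery), the following cuts a buffer of voice samples into small numbered packets and hands them to the network one by one; the destination address, port, and one-byte sequence header are invented for illustration.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.util.Arrays;

    // Toy illustration of packet switching: a voice sample buffer is cut
    // into small numbered packets and sent independently over UDP. Routers
    // forward each packet on its own; the receiver reorders by sequence.
    public class PacketSender {
        public static void main(String[] args) throws Exception {
            byte[] audio = new byte[1600];            // pretend voice samples
            int payloadSize = 160;                    // ~20 ms of 8 kHz audio
            InetAddress dest = InetAddress.getByName("192.0.2.10"); // placeholder
            try (DatagramSocket socket = new DatagramSocket()) {
                for (int seq = 0; seq * payloadSize < audio.length; seq++) {
                    byte[] chunk = Arrays.copyOfRange(
                        audio, seq * payloadSize, (seq + 1) * payloadSize);
                    byte[] packet = new byte[chunk.length + 1];
                    packet[0] = (byte) seq;           // crude sequence header
                    System.arraycopy(chunk, 0, packet, 1, chunk.length);
                    socket.send(new DatagramPacket(packet, packet.length, dest, 5004));
                }
            }
        }
    }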

The ease comes into play because these systems do not use highly elaborate switches but a very simple arrangement based on internet routers, which provide accurate and high-speed connectivity.

Description of Technology
How does VOIP work? To understand its working procedure, we have to understand the basic functionality of the internet, through which all VOIP systems work, as it provides the lifeline of the entire system. VOIP works much as e-mail and websites do, taking advantage of packet-based networking. There are two types of VOIP-based systems in use throughout the world today:

Analog Telephone Adapter

Internet Protocol Phones
These systems provide a variety of functionality based on their usage and necessity. Let us discuss each of them briefly.

Analog Telephone Adapter
This is the easiest and most commonly used method in the world, as it allows the connection between a standard phone and a standard computer by means of the internet connection. This system is used by many businesses as well as home users for its ease and affordability. The other components required are a standard PC sound card, a microphone integrated into the headphones, and a stable internet connection.

All one has to do is plug the cable into the ATA socket and tune the software on the PC to establish an appropriate connection, and the system is ready to be used. Some companies provide ready-made solutions in this segment, such as AT&T CallVantage.

Internet Protocol Phones
IP phones appear as normal phones, but the difference is the presence of a headset. This type of technology is very helpful for long-distance calls and is mostly used by outsourcing businesses around the world. Here, software also needs to be installed on the computer, and once installed, the built-in setup instructions guide the user in using the system effectively and to full potential. IP phones can connect directly through the router, with all the needed apparatus already integrated into the handset; all that is needed is to plug the cable into an empty RJ-45 port. The technology is enhanced by WiFi phones, which additionally let subscribers make WiFi-based VOIP calls from any WiFi-enabled location. The easiest and most convenient way to use VOIP is computer to computer; the procedure is similar to the ATA one, but the use has become even simpler.

Potential Applications of VOIP
With the advancement of internet technology, many organizations have decided to implement this technology in their businesses. Some of those that have already switched to VOIP have opened a sector designated as an environmentally friendly site. The purpose is to replace legacy telephone-based communication systems with sophisticated VOIP-based systems that improve connectivity and reduce cost, proving cost-efficient for the organization in the longer run. Given the importance of business communications and the global expansion of outsourcing businesses such as inbound and outbound call centers, VOIP-based technology has created a revolution of effective, affordable, and efficient communication that is growing at a rapid pace.

Many large organizations have been running VOIP systems for a few years and are now enjoying the ease of use, flexibility, and efficiency of this revolutionary technology, to the point where it seems capable of replacing legacy telephone systems throughout the world in the next few years. Despite its advantages, it also has drawbacks, including its vulnerability to network viruses and worms. The main drawback is that it can struggle to operate through firewalls, which may need to be reconfigured or disabled before use.

The level of security that can be applied is only internal, and since VOIP is usually a long-distance network tool, it is bound to be affected by harmful obstacles such as viruses. The software is another point of concern, as it may be prone to errors and, as is the case with all new technologies, is usually anomaly prone.

Competing Technologies
For every emerging technology, there is usually a competitor destined to grab the same market segment with similar or better technology at a similar or lower price, so competition is bound to take place. The IT industry is a crowded market today, and many stakeholders provide cutting-edge solutions across many markets; be it Intel or AMD, Microsoft or Apple, Cisco or Nortel, it is not an easy road to travel. VOIP is also rivaled by competitors such as Cisco's InfiniBand offerings, which are still at an early stage of testing and will take time to mature for release on the market.

That system seems to hold a lot of promise when coupled with a Grid Director based network gateway, providing very high speed and protection to users. This is a concern for VOIP, which has traditionally suffered from poor maintainability and protection and has had its share of bugs and compatibility issues with different operating systems. InfiniBand promises to be very flexible in all these domains, and if that proves true, we may see an even better technology offering a more vibrant solution to businesses around the world, as has been the scenario with most Cisco-based solutions over the history of the Internet.

These rely on the latest CSS 11503 and 11506 switches, with a new three- to six-slot modular architecture, and provide a speedy networking solution. Once they arrive on the market, they will provide tough competition to currently deployed VOIP technology, but in terms of cost Cisco's systems will remain behind, as VOIP is simple, easy to use and implement, and has a mature architecture backed by reliable support. In the coming five years, however, we may see a different technology rivaling the VOIP system, and to cope with that, VOIP will have to deliver better, faster, and more economical solutions.

The primary focus was on the internal side, where change had to proceed on a slow but constant scale; any change at such a huge level had to be implemented without losing any previous advantages in the process. The implementation will bring many benefits to Microsoft, as to any enterprise, but only if the IT organization manages it in an appropriate and timely manner.

The changed methodology of data transfer used in VOIP technology serves many purposes. The first noticeable difference is packet-based transfer instead of the traditional circuit-switched transfer used in previous telecommunication systems. This provides instant, affordable, and reliable service to users and does not suffer the bottlenecks observed in the previous system.

One drawback, however, is the delay of packets during heavy use of the system, but this is considered a minor issue and can be addressed with the help of updated software. Reduced latency is another aspect that has increased speed manyfold compared to the traditional system. This means that the VOIP system is capable of delivering seamless service for an entire session, during which several tasks can be carried out at the same time. It enables data to reach its destination from its source in a matter of seconds, which was impossible to achieve with traditional telecommunications.
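
The usual software remedy for late or out-of-order packets, though not named in the text above, is a jitter buffer, standard in VOIP receivers. The following is a minimal sketch under that assumption: packets are held briefly and released in sequence-number order at a steady rate, with class names and the payload format invented for the example.

    import java.util.PriorityQueue;

    // Sketch of a jitter buffer: arriving packets are queued by sequence
    // number, and a fixed timer releases them in order, so brief network
    // delays are smoothed out instead of garbling the audio stream.
    public class JitterBuffer {
        private final PriorityQueue<int[]> buffer =
            new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        private int nextSeq = 0;

        // Packets may arrive in any order; store them keyed by sequence number.
        public void onPacketArrived(int seq, int payload) {
            buffer.add(new int[] {seq, payload});
        }

        // Called on a fixed timer; plays the next packet if it has arrived,
        // otherwise reports a gap (heard as a brief dropout).
        public void playNext() {
            if (!buffer.isEmpty() && buffer.peek()[0] == nextSeq) {
                int[] packet = buffer.poll();
                System.out.println("Playing packet " + packet[0]);
            } else {
                System.out.println("Packet " + nextSeq + " late or lost");
            }
            nextSeq++;
        }
    }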

Since the entire system is internet based, it suffers internet vulnerabilities when a server is down or experiencing problems. This is the main trouble a VOIP-based system faces, as it relies totally on the internet for functionality: the better the internet service, the faster the performance. Another weakness lies in the system's built-in safety features and its lack of support for OS-based antivirus and anti-spyware software. If a firewall, antivirus, or anti-spyware tool is active on a server, a VOIP switch will struggle to establish an efficient connection to the required destination.

To address this issue, VOIP manufacturers are looking to implement an effective all-in-one solution for their global customers in the coming years. After all, hardly any person or company that is aware of the nature of the Internet would willingly or unwillingly open themselves up to the inevitable massive abuse that rains down via a published telephone or fax number.

Despite VOIP's efficiency and ease of use, there is one system competing with it that has by no means become obsolete: e-mail, which is used by all global business organizations, consultancies, and many other sectors. It is understandable how annoying junk mail can be for users, but it is not wise to call the numbers it lists only to abuse or yell; it is better simply to avoid it. The same goes for faxes: there is no point in sending multiple pages of faxes to warn or harass, and no point in calling such a number time and again just to run up the bill, because not only is it a crime to harass anyone on the phone, but your own number will appear in their phone bill.

There is a possibility that the owner of that number either became a victim of a junk mailer himself or was a careless newcomer to internet marketing who committed the horrid mistake of provoking internet users. Staniford (2006) conducted a survey in which he talked to some of these people by phone, and, not surprisingly, most of them were very remorseful and regretful, fielding angry phone calls all day.

For expert junk emailers it is incredibly easy to counterfeit an email address: all they have to do is type a bogus From address into their mail client. It is not a difficult thing to do. Change the return address in your email software to something like nobody@krskjne.com, send a message to your own actual email address, and it will work. There is nothing the Internet Service Provider can do about this; it is simply unable to help you reduce junk email unless the body of the spam specifically requests replies to that address for more information.
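
A hedged sketch of the point follows: the program below merely assembles the raw text of a message with a forged From line (reusing the made-up address from the example above) and prints it; an SMTP server receiving such text would not verify that the From header is genuine.

    // Illustration of why a "From" address proves nothing: the sender's
    // mail client simply writes whatever text it likes into the header
    // before the message is handed to an SMTP server. Addresses are made up.
    public class ForgedHeaderDemo {
        public static void main(String[] args) {
            String message =
                "From: nobody@krskjne.com\r\n" +   // bogus, unverified address
                "To: victim@example.com\r\n" +
                "Subject: Special offer\r\n" +
                "\r\n" +
                "Junk mail body goes here.\r\n";
            // A real mail client would now pass this text to an SMTP server,
            // which does not check that the From line is genuine.
            System.out.println(message);
        }
    }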

Most junk emailers realize that they can be tracked through the Received lines in the headers. This is why, most of the time, they attempt to conceal or confuse the headers.

Although these Received headers can be counterfeited as well, doing so is somewhat more difficult than simply counterfeiting the return address. One must also remember that most incoming emails, including junk emails, have a total of just two Received lines in the headers: the first is generated by the ISP's incoming mail machine, while the other is generated by the outgoing server and indicates the originating IP. This should be enough to caution any internet user, since most junk mails carry an additional header.
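
As a small sketch of that tracing idea, the following scans a pair of Received headers and pulls out the IP recorded in the line added by the outgoing server; the sample headers and addresses are invented for illustration.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Sketch of tracing a junk mail's origin: read the Received headers and
    // extract the IP address from the line written by the outgoing server,
    // which is far harder to counterfeit than the From line.
    public class ReceivedHeaderTrace {
        public static void main(String[] args) {
            String[] headers = {
                "Received: from mx.isp.example (mx.isp.example [198.51.100.7])",
                "Received: from bulkmailer.example ([203.0.113.42])"
            };
            Pattern ip = Pattern.compile("\\[(\\d{1,3}(?:\\.\\d{1,3}){3})\\]");
            // The last Received line was added first, by the outgoing server,
            // so it is the best clue to the originating IP.
            Matcher m = ip.matcher(headers[headers.length - 1]);
            if (m.find()) {
                System.out.println("Likely originating IP: " + m.group(1));
            }
        }
    }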

Apart from the obvious, some established communication technologies are likely to give stiff competition to VOIP and upcoming communication systems. It might sound far-fetched, but current e-mail systems are the biggest rivals of VOIP technology, as most organizations still prefer to communicate by free e-mail, so far the cheapest and easiest means of communication in the world.

One such company is the DMA, which provides ease to its customers by not charging directly to the credit card. This matters because being asked for a credit card number puts many people off; indeed, if they ask for your credit card number, there is probably an intent to discourage people from registering, and so the junk mail continues to bother the masses. Whatever the case may be, the service is genuine, and registering does reduce your junk mail manyfold. There is a drawback, however: it works in only one dimension, for national mail rather than local mail, and only for residential addresses rather than business ones, so one can say the job is merely half done.

Let us look at this situation from the perspective of the junk mail generator, the actual industrialist, to discover why they do what they do. Any industry where almost ninety-eight percent of the marketing fails to reach its targeted audience is an enormous business opportunity for anyone able to work out how to hit the target more often than not. If we as users eliminate all the unwanted ads from our mailboxes, profits for mailers will increase, because their advertising costs will fall without any diminishing revenue.

According to Kremer, what other reason can one imagine for U.S. merchants spending a staggering $30 billion per year distributing junk mail, even though they realize that less than half of it will ever be opened? Accurately targeting direct mail is in the interest of the economy, along with other factors such as forests, solid waste agencies, the climate, and postal customers.

Be careful and try not to fill out forms unnecessarily, for in that case your address will surely be tracked; every time you do fill out an online form of any sort, remember to state that you do not wish to receive junk mail or advertising spam. The result will be as one should expect: when they see that you are not interested, they will remove your name from their customer list. A related phenomenon is repeatedly observed in the many outsourcing companies where VOIP systems are in use; the effectiveness and reliability of VOIP is evident as these companies operate from various parts of the world to reduce business operating costs. To achieve this, they hire employees where cheap labor is available, such as in India, China, and Pakistan. The point is that connectivity is not an issue as long as a seamless Internet connection is available.

Conclusion
We can now conclude, having analyzed the advantages and disadvantages of the VOIP system, its possible competition in years to come, and the legacy systems that are still proving their worth to some extent. VOIP is definitely here to stay, and it is doing just that. Efficiency, user-friendliness, ease of use, and good speed are some of the advantages it provides.


The drawbacks concern its protection from internet-based threats, since it is destined to work in a wide domain where it is vulnerable to network attacks. If these issues are addressed in due course, it will remain in business; still, within a year or so it will face stiff competition from InfiniBand and others, and in order to remain competitive in the market, VOIP providers will have to take care of these issues.