Honeypot

          The Internet is growing fast, roughly doubling its number of websites every 53 days, and the number of people using it is also growing. Hence, global communication is becoming more important every day. At the same time, computer crime is also increasing. Countermeasures are developed to detect or prevent attacks, and most of these measures are based on known facts and known attack patterns. Countermeasures such as firewalls and network intrusion detection systems are based on prevention, detection and reaction mechanisms; but is there enough information about the enemy?

          As in the military, it is important to know who the enemy is, what kind of strategy he uses, what tools he utilizes and what he is aiming for. Gathering this kind of information is not easy, but it is important. By knowing attack strategies, countermeasures can be improved and vulnerabilities can be fixed. Gathering as much information as possible is one main goal of a honeypot. Generally, such information gathering should be done silently, without alarming an attacker. All the gathered information gives the defending side an advantage and can therefore be used on production systems to prevent attacks.

          A honeypot is primarily an instrument for information gathering and learning. Its primary purpose is not to serve as an ambush for the blackhat community, catching attackers in action and pressing charges against them. The focus lies on silently collecting as much information as possible about their attack patterns, the programs they use, the purpose of the attack and the blackhat community itself. All of this information is used to learn more about blackhat proceedings and motives, as well as their technical knowledge and abilities. Information gathering, however, is only the primary purpose of a honeypot. There are many other possibilities: diverting hackers from production systems or catching a hacker while conducting an attack are just two examples. Honeypots are not a perfect solution for solving or preventing computer crime.

          Honeypots are hard to maintain, and they need operators with good knowledge of operating systems and network security. In the right hands, a honeypot can be an effective tool for information gathering. In the wrong, inexperienced hands, a honeypot can become just another infiltrated machine and an instrument for the blackhat community.
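
          As a rough illustration of the information-gathering idea, the sketch below shows a hypothetical, minimal low-interaction honeypot in Python: it listens on an otherwise unused port and silently logs every connection attempt together with its source address and the first bytes sent. The port number and log file name are illustrative assumptions, not part of any particular honeypot product.

    # Minimal low-interaction honeypot sketch: logs connection attempts on an
    # otherwise unused port. Port number and log file are illustrative choices.
    import socket
    import datetime

    LISTEN_PORT = 2323          # assumed unused port that may attract scanners
    LOG_FILE = "honeypot.log"   # hypothetical log destination

    def log(entry):
        with open(LOG_FILE, "a") as f:
            f.write(f"{datetime.datetime.utcnow().isoformat()} {entry}\n")

    def main():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen(5)
        while True:
            conn, addr = srv.accept()
            conn.settimeout(5)
            try:
                data = conn.recv(1024)          # capture the attacker's first bytes
            except socket.timeout:
                data = b""
            log(f"connection from {addr[0]}:{addr[1]} sent {data!r}")
            conn.close()                        # no real service is offered

    if __name__ == "__main__":
        main()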



Blue Gene Technology

A Blue Gene/P supercomputer at Argonne National Laboratory
          In November 2001 IBM announced a partnership with Lawrence Livermore National Laboratory to build the Blue Gene/L (BG/L) supercomputer, a 65,536-node machine designed around embedded PowerPC processors. Through the use of system-on-a-chip integration coupled with a highly scalable cellular architecture, Blue Gene/L will deliver 180 or 360 Teraflops of peak computing power, depending on the utilization mode.

The block scheme of the Blue Gene/L ASIC, including dual PowerPC 440 cores.
          Blue Gene/L represents a new level of scalability for parallel systems. Whereas existing large-scale systems range in size from hundreds to a few thousand compute nodes, Blue Gene/L makes a jump of almost two orders of magnitude. Several techniques have been proposed for building such a powerful machine. Some of the designs call for extremely powerful (100 GFLOPS) processors based on superconducting technology. The class of designs that we focus on uses current and foreseeable CMOS technology. It is reasonably clear that such machines, in the near future at least, will require a departure from the architectures of current parallel supercomputers, which use a few thousand commodity microprocessors. With current technology, it would take around a million microprocessors to achieve petaFLOPS performance.

          Clearly, power requirements and cost considerations alone preclude this option. The class of machines of interest to us uses a “processors-in-memory” design: the basic building block is a single chip that includes multiple processors as well as memory and interconnection routing logic. On such machines, the ratio of memory to processors will be substantially lower than the prevalent one. As the technology is assumed to be of the current generation, the number of processors will still have to be close to a million, but the number of chips will be much lower. Using such a design, petaFLOPS performance could be reached within the next two to three years, especially since IBM has announced the Blue Gene project aimed at building such a machine.
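
          The rough arithmetic behind that processor count can be sketched as follows; the per-processor figure of about 1 GFLOPS is an illustrative assumption for embedded cores of that generation, not a number taken from the Blue Gene specifications.

    # Back-of-the-envelope count of processors needed for 1 petaFLOPS,
    # assuming ~1 GFLOPS sustained per embedded processor (illustrative figure).
    target_flops = 1e15          # 1 petaFLOPS
    per_processor_flops = 1e9    # ~1 GFLOPS per processor (assumption)
    processors_needed = target_flops / per_processor_flops
    print(f"{processors_needed:,.0f} processors")   # -> 1,000,000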


One Blue Gene/L node board

          The system software for Blue Gene/L is a combination of standard and custom solutions. The software architecture for the machine is divided into three functional entities arranged hierarchically: a computational core, a control infrastructure and a service infrastructure. The I/O nodes (part of the control infrastructure) execute a version of the Linux kernel and are the primary off-load engine for most system services. No user code executes directly on the I/O nodes.


Optical Computers

          Computers have become an indispensable part of life. We need computers everywhere, be it for work, research or any other field. As the use of computers in our day-to-day life increases, the computing resources we need also go up. For companies like Google and Microsoft, harnessing resources as and when they need them is not a problem. But when it comes to smaller enterprises, affordability becomes a huge factor. With large infrastructure come problems like machine failures, hard drive crashes, software bugs, and so on. This can be a big headache for such a community. Optical computing offers a solution to this situation.

          An optical computer is a hypothetical device that uses visible light or infrared beams, rather than electric current, to perform digital computations. An electric current flows at only about 10 percent of the speed of light. By applying some of the advantages of visible and/or IR networks at the device and component scale, a computer can be developed that performs operations many times faster than a conventional electronic computer.


          Optical computing describes a new technological approach for constructing computer processors and other components. Instead of the current approach of electrically transmitting data along tiny wires etched onto silicon, optical computing employs a technology called silicon photonics that uses laser light instead. This use of optical lasers overcomes the constraints associated with heat dissipation in today's components and allows much more information to be stored and transmitted in the same amount of space.

          Optical computing means performing computations, operations, storage and transmission of data using light. Optical technology promises massive upgrades in the efficiency and speed of computers, as well as significant shrinkage in their size and cost. An optical desktop computer could be capable of processing data up to 100,000 times faster than current models.


Surface Computing


          The name Surface comes from "surface computing," and Microsoft envisions the coffee-table machine as the first of many such devices. Surface computing uses a blend of wireless protocols, special machine-readable tags and shape recognition to seamlessly merge the real and the virtual world — an idea the Milan team refers to as "blended reality." The table can be built with a variety of wireless transceivers, including Bluetooth, Wi-Fi and (eventually) radio frequency identification (RFID) and is designed to sync instantly with any device that touches its surface.

          It supports multiple touch points – Microsoft says "dozens and dozens" – as well as multiple users simultaneously, so more than one person could be using it at once, or one person could be doing multiple tasks.


          The term "surface" describes how it's used. There is no keyboard or mouse. All interactions with the computer are done via touching the surface of the computer's screen with hands or brushes, or via wireless interaction with devices such as smartphones, digital cameras or Microsoft's Zune music player. Because of the cameras, the device can also recognize physical objects; for instance credit cards or hotel "loyalty" cards.

          For instance, a user could set a digital camera down on the tabletop and wirelessly transfer pictures into folders on Surface's hard drive. Setting a music player down would let a user drag songs from his or her home music collection directly into the player, or between two players, using a finger. A user could even transfer mapping information for the location of a restaurant where they have just made reservations through a Surface tabletop over to a smartphone just before walking out the door.


Fog Screen

          Fog Screen is a breakthrough technology that allows the projection of high-quality images in the air. It is currently the only walk-through projection screen. You can literally use the air as your user interface by touching nothing but the air with your bare hands. The screen is created by a suspended fog-generating device with no frame around it, and it works with ordinary video projectors. The fog used is dry, so it does not make you wet even if you stay under the Fog Screen device for a long time. The fog is made of ordinary water with no chemicals whatsoever. With two projectors, you can project different images on both sides of the screen. It is a display device and an application of computer graphics.

  • Inspired by science fiction movies such as Star Wars, two Finnish virtual reality researchers created the Fog Screen to recreate some of the effects from these movies in real life.
  • Fog Screen is an exciting new projection technology that allows users to project images and video onto a screen of “dry” fog, creating the illusion that the images are floating in midair.
  • Fog Screen is the world’s first immaterial walk-through projection screen. Its qualities, in particular the walk-through capability, set Fog Screen apart from other displays and have thus created a successful market for its products.
  • The Fog Screen is an innovative display technology that allows for projections on a thin layer of dry fog. Imagine the traditional pull down screen that is found in many classrooms today. Instead of a screen being pulled down from the ceiling, fog is pushed down and held in place by several small fans, allowing for a consistent surface for display.



Virtual keyboard - VKB

          A virtual keyboard is actually a key-in device, roughly the size of a fountain pen, which uses highly advanced laser technology to project a full-sized keyboard onto a flat surface. Since the invention of computers, they have undergone rapid miniaturization. Disks and components grew smaller in size, but one component remained the same for decades: the keyboard. Since miniaturization of a traditional keyboard is very difficult, we go for a virtual keyboard. Here, a camera tracks the finger movements of the typist to determine the correct keystroke. A virtual keyboard is a keyboard that a user operates by typing on or within a wireless or optically detectable surface or area rather than by depressing physical keys.


          Since their invention, computers have undergone rapid miniaturization, from 'space saver' to 'as tiny as your palm'. Disks and components grew smaller in size, but one component has remained the same for decades: the keyboard. Miniaturization of the keyboard has proved a nightmare for users. Users of PDAs and smartphones are annoyed by the tiny size of the keys. The Virtual Keyboard uses advanced technologies to project a full-sized keyboard onto any surface. The device is a solution for mobile computer users who prefer touch-typing to cramping over tiny keys. Typing information into mobile devices usually feels about as natural as a linebacker riding a Big Wheel, and the Virtual Keyboard is a way to eliminate finger cramping. All that is needed to use the keyboard is a flat surface. Using laser technology, a bright red image of a keyboard is projected from a device such as a handheld. Detection technology based on optical recognition allows users to tap the images of the keys, so the virtual keyboard behaves like a real one. It is designed to support any typing speed.


          A virtual keyboard is a keyboard that a user operates by typing (moving fingers) on or within a wireless or optical-detectable surface or area rather than by depressing physical keys. In one technology, the keyboard is projected optically on a flat surface and, as the user touches the image of a key, the optical device detects the stroke and sends it to the computer. In another technology, the keyboard is projected on an area and selected keys are transmitted as wireless signals using the short-range Bluetooth technology. With either approach, a virtual keyboard makes it possible for the user of a very small smart phone or a wearable computer to have full keyboard capability.

          Theoretically, with either approach, the keyboard can even be projected in mid-air and the user can type by moving fingers through the air! The regular QWERTY keyboard layout is provided. All that is needed to use the keyboard is a flat surface. Using laser technology, a bright red image of a keyboard is projected from a device such as a handheld. Detection technology based on optical recognition allows users to tap the images of the keys, so the virtual keyboard behaves like a real one. It is designed to support any typing speed. Several products have been developed that use the term 'virtual keyboard' to mean a keyboard that has been put on a display screen as an image map. In some cases, the keyboard can be customized. Depending on the product, the user (who may be someone unable to use a regular keyboard) can use a touch screen or a mouse to select the keys.
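
          To make the detection step concrete, the sketch below maps a fingertip coordinate, as it might be reported by the optical sensor, onto a key of a projected QWERTY layout. The key size, row stagger and coordinate units are assumptions made for illustration and do not describe any actual product.

    # Sketch: map a detected fingertip position to a key in a projected QWERTY
    # layout. Key size and row layout are illustrative assumptions.
    ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
    KEY_W, KEY_H = 40, 40        # projected key size in sensor units (assumed)
    ROW_OFFSET = [0, 20, 40]     # horizontal stagger of each row (assumed)

    def key_at(x, y):
        """Return the character under the fingertip, or None if off the keys."""
        row = int(y // KEY_H)
        if not 0 <= row < len(ROWS):
            return None
        col = int((x - ROW_OFFSET[row]) // KEY_W)
        if not 0 <= col < len(ROWS[row]):
            return None
        return ROWS[row][col]

    # Example: a tap detected at x=95, y=50 lands on the second row.
    print(key_at(95, 50))   # -> 's'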


Hacking

          The Internet, like any other new medium historically, provides new methods of engaging in illegal activities. That is not to say that the Internet is intrinsically 'bad', as many tabloid journalists would have us believe; it is simply a means for human beings to express themselves and share common interests. Unfortunately, many of these common interests include pornography, trading Warez (pirated software), trading illegal MP3 files, and engaging in all kinds of fraud such as credit card fraud. Hacking, on the other hand, is an activity greatly misrepresented by the wider media and Hollywood movies. Although many hackers go on from being computer enthusiasts to Warez pirates, many also become system administrators, security consultants or website managers.

  • Hacking generally refers to the act of a person abusing computer access, breaking into computers, or using computers without authorization.
  • An Attack is the attempt of an individual or group to violate a system through some series of events. The attack can originate from someone inside or outside the network.
  • An Intruder or Attacker is a person who carries out an attack.
       
          Hacker is a term used to describe different types of computer experts. It is also sometimes extended to mean any kind of expert, especially with the connotation of having particularly detailed knowledge or of cleverly circumventing limits. The meaning of the term, when used in a computer context, has changed somewhat over the decades since it first came into use, as it has been given additional and clashing meanings by new users of the word.

          Currently, "hacker" is used in two main ways, one positive and one pejorative. It can be used in the computing community to describe a particularly brilliant programmer or technical expert (). This is said by some to be the "correct" usage of the word (see the Jargon File definition below). In popular usage and in the media, however, it generally describes computer intruders or criminals. "Hacker" can be seen as a shibboleth, identifying those who use it in its positive sense as members of the computing community.


3D Searching


          The 3D-search system uses algorithms to convert the selected or drawn image-based query into a mathematical model that describes the features of the object being sought. This converts drawings and objects into a form that computers can work with. The search system then compares the mathematical description of the drawn or selected object to those of 3D objects stored in a database, looking for similarities in the described features.
          The key to the way computer programs look for 3D objects is the voxel (volume pixel). A voxel is a set of graphical data - such as position, color, and density - that defines the smallest cube-shaped building block of a 3D image. Computers can display 3D images only in two dimensions. To do this, 3D rendering software takes an object and slices it into 2D cross sections. The cross sections consist of pixels (picture elements), which are single points in a 2D image. To render the 3D image on a 2D screen, the computer determines how to display the 2D cross sections stacked on top of each other, using the applicable interpixel and interslice distances to position them properly. The computer interpolates data to fill in interslice gaps and create a solid image.
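
          As a minimal illustration of the voxel idea, the sketch below turns a set of 3D surface points into a binary occupancy grid of cube-shaped voxels, a common starting point for computing shape descriptors. The grid resolution is an assumed value, and the code is not taken from any particular search engine.

    # Sketch: build a binary voxel (occupancy) grid from 3D points.
    # Grid resolution is an illustrative assumption.
    import numpy as np

    def voxelize(points, resolution=32):
        """points: (N, 3) array of xyz coordinates -> (res, res, res) 0/1 grid."""
        pts = np.asarray(points, dtype=float)
        mins, maxs = pts.min(axis=0), pts.max(axis=0)
        scale = (maxs - mins).max() or 1.0          # keep the aspect ratio
        idx = ((pts - mins) / scale * (resolution - 1)).astype(int)
        grid = np.zeros((resolution,) * 3, dtype=np.uint8)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1   # mark occupied voxels
        return grid

    # Example: voxelize a random point cloud.
    cloud = np.random.rand(1000, 3)
    print(voxelize(cloud).sum(), "occupied voxels")
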
          True 3D search systems offer two principal ways to formulate a query: users can select objects from a catalog of images based on product groupings, such as gears or sofas, or they can use a drawing program to create a picture of the object they are looking for. For example, Princeton's 3D search engine uses an application to let users draw a 2D or 3D representation of the object they want to find.
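
          Continuing the sketch above, one deliberately crude way to compare a query shape against a database is to reduce each voxel grid to a simple rotation-insensitive feature vector and rank database objects by distance. Real systems use far more sophisticated descriptors, so the following is purely illustrative.

    # Sketch: crude shape descriptor and nearest-neighbour search over voxel grids.
    import numpy as np

    def descriptor(grid, bins=16):
        """Histogram of voxel distances from the shape's centre of mass."""
        occ = np.argwhere(grid > 0)
        centre = occ.mean(axis=0)
        dists = np.linalg.norm(occ - centre, axis=1)
        hist, _ = np.histogram(dists, bins=bins, range=(0, grid.shape[0]))
        return hist / max(hist.sum(), 1)            # normalise to compare shapes

    def search(query_grid, database_grids):
        """Return database indices ranked by descriptor distance to the query."""
        q = descriptor(query_grid)
        scores = [np.linalg.norm(q - descriptor(g)) for g in database_grids]
        return np.argsort(scores)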


CAPTCHA

          You're trying to sign up for a free email service offered by Gmail or Yahoo. Before you can submit your application, you first have to pass a test. It's not a hard test -- in fact, that's the point. For you, the test should be simple and straightforward. But for a computer, the test should be almost impossible to solve.

          This sort of test is a CAPTCHA, also known as a type of Human Interaction Proof (HIP). You've probably seen CAPTCHA tests on lots of Web sites. The most common form of CAPTCHA is an image of several distorted letters. It's your job to type the correct series of letters into a form. If your letters match the ones in the distorted image, you pass the test.

          

          CAPTCHA is short for Completely Automated Public Turing test to tell Computers and Humans Apart. The term "CAPTCHA" was coined in 2000 by Luis von Ahn, Manuel Blum and Nicholas J. Hopper (all of Carnegie Mellon University) and John Langford (then of IBM). CAPTCHAs are challenge-response tests used to ensure that users are indeed human. The purpose of a CAPTCHA is to block form submissions from spam bots – automated scripts that harvest email addresses from publicly available web forms. A common kind of CAPTCHA used on many websites requires users to enter the string of characters that appears in a distorted form on the screen.

          
          CAPTCHAs are used because it is difficult for computers to extract the text from such a distorted image, whereas it is relatively easy for a human to understand the text hidden behind the distortions. Therefore, the correct response to a CAPTCHA challenge is assumed to come from a human, and the user is permitted into the website.
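
          A rough sketch of how a distorted-text CAPTCHA image might be generated is shown below. It assumes the Pillow imaging library is available and uses its built-in default font; it illustrates the idea of distorting characters and adding clutter, not any production CAPTCHA scheme.

    # Sketch: generate a simple distorted-text CAPTCHA image with Pillow
    # (assumed installed: pip install Pillow). Not a production-strength scheme.
    import random
    import string
    from PIL import Image, ImageDraw, ImageFont

    def make_captcha(length=5, out_file="captcha.png"):
        text = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
        font = ImageFont.load_default()
        canvas = Image.new("L", (40 * length, 60), color=255)
        for i, ch in enumerate(text):
            tile = Image.new("L", (30, 30), color=255)
            ImageDraw.Draw(tile).text((8, 8), ch, fill=0, font=font)
            tile = tile.rotate(random.uniform(-35, 35), fillcolor=255)  # distort
            canvas.paste(tile, (10 + 40 * i, random.randint(5, 25)))
        draw = ImageDraw.Draw(canvas)
        for _ in range(4):                                  # add confusing lines
            draw.line([(random.randint(0, canvas.width), random.randint(0, canvas.height))
                       for _ in range(2)], fill=0, width=1)
        canvas.save(out_file)
        return text                                         # the expected answer

    print("answer:", make_captcha())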


Neural Networks & Their Application

          Neural networks have seen an explosion of interest over the last few years and are being successfully applied across an extraordinary range of problem domains, in areas as diverse as finance, medicine, engineering, geology, physics and biology. The excitement stems from the fact that these networks are attempts to model the capabilities of the human brain. From a statistical perspective, neural networks are interesting because of their potential use in prediction and classification problems.

          Artificial neural networks (ANNs) are a non-linear, data-driven, self-adaptive approach, as opposed to traditional model-based methods. They are powerful tools for modelling, especially when the underlying data relationship is unknown. ANNs can identify and learn correlated patterns between input data sets and corresponding target values. After training, ANNs can be used to predict the outcome of new independent input data. ANNs imitate the learning process of the human brain and can process problems involving non-linear and complex data, even if the data are imprecise and noisy. Thus they are ideally suited to the modelling of agricultural data, which are known to be complex and often non-linear.
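
          To make the "learn correlated patterns, then predict" idea concrete, here is a deliberately tiny sketch of a one-hidden-layer feed-forward network trained by back-propagation on the XOR toy problem. The layer sizes, learning rate and number of iterations are illustrative choices only.

    # Sketch: a tiny one-hidden-layer neural network trained with backpropagation
    # on the XOR problem. Sizes and learning rate are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)       # input -> hidden
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)       # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)                          # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)               # backpropagate error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())                           # should approach [0 1 1 0]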

          These networks are “neural” in the sense that they may have been inspired by neuroscience, not necessarily because they are faithful models of biological neural or cognitive phenomena. In fact, the majority of these networks are more closely related to traditional mathematical and/or statistical models, such as non-parametric pattern classifiers, clustering algorithms, non-linear filters, and statistical regression models, than they are to neurobiological models.


          Neural networks (NNs) have been used for a wide variety of applications where statistical methods are traditionally employed. They have been used in classification problems, such as identifying underwater sonar signals, recognizing speech, and predicting the secondary structure of globular proteins. In time-series applications, NNs have been used in predicting stock market performance. Statisticians and users of statistics normally solve these problems through classical statistical methods, such as discriminant analysis, logistic regression, Bayes analysis, multiple regression, and ARIMA time-series models. It is, therefore, time to recognize neural networks as a powerful tool for data analysis.



4G Wireless Systems

          The approaching 4G (fourth generation) mobile communication systems are projected to solve the still-remaining problems of 3G (third generation) systems and to provide a wide variety of new services, from high-quality voice to high-definition video to high-data-rate wireless channels. The term 4G is used broadly to include several types of broadband wireless access communication systems, not only cellular telephone systems. One of the terms used to describe 4G is MAGIC: Mobile multimedia, Anytime anywhere, Global mobility support, Integrated wireless solution, and Customized personal service. As a promise for the future, 4G systems, that is, cellular broadband wireless access systems, have been attracting much interest in the mobile communication arena. 4G systems will not only support the next generation of mobile services, but will also support fixed wireless networks. This paper presents an overall vision of the 4G features, framework, and integration of mobile communication.

          The features of 4G systems might be summarized with one word: integration. 4G systems are about seamlessly integrating terminals, networks, and applications to satisfy increasing user demands. The continuous expansion of mobile communication and wireless networks shows evidence of exceptional growth in the areas of mobile subscribers, wireless network access, mobile services, and applications. Consumers demand more from their technology. Whether it be a television, cellular phone, or refrigerator, the latest technology purchase must have new features. With the advent of the Internet, the most-wanted feature is better, faster access to information. Cellular subscribers pay extra on top of their basic bills for such features as instant messaging, stock quotes, and even Internet access right on their phones. But that is far from the limit of features; manufacturers entice customers to buy new phones with photo and even video capability. It is no longer a quantum leap to envision a time when access to all necessary information, with the power of a personal computer, sits in the palm of one's hand. To support such a powerful system, we need pervasive, high-speed wireless connectivity.


          A number of technologies currently exist to provide users with high-speed digital wireless connectivity; Bluetooth and 802.11 are examples. These two standards provide very high-speed network connections over short distances, typically in the tens of meters. Meanwhile, cellular providers seek to increase speed on their long-range wireless networks. The goal is the same: long-range, high-speed wireless, which for the purposes of this report will be called 4G, for fourth-generation wireless system. Such a system does not yet exist, nor will it exist in today's market without standardization. Fourth-generation wireless needs to be standardized throughout the world due to its enticing advantages to both users and providers.


Artificial Brain

          Artificial brain is a term commonly used in the media to describe research that aims to develop software and hardware with cognitive abilities similar to the animal or human brain. Research investigating "artificial brains" plays three important roles in science:

  1. An ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience.
  2. A thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, in theory, to create a machine that has all the capabilities of a human being.
  3. A serious long term project to create machines capable of general intelligent action or Artificial General Intelligence. This idea has been popularised by Ray Kurzweil as strong AI (taken to mean a machine as intelligent as a human being).

          An example of the first objective is the project reported by Aston University in Birmingham, England, where researchers are using biological cells to create "neurospheres" (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer's disease, motor neurone disease and Parkinson's disease.

          The second objective is a reply to arguments such as John Searle's Chinese room argument, Hubert Dreyfus's critique of AI, or Roger Penrose's argument in The Emperor's New Mind. These critics argued that there are aspects of human consciousness or expertise that cannot be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper "Computing Machinery and Intelligence".

          The third objective is generally called artificial general intelligence by researchers. However, Kurzweil prefers the more memorable term strong AI. In his book The Singularity is Near he focuses on whole brain emulation, using conventional computing machines as an approach to implementing artificial brains, and claims (on the grounds of computer power continuing an exponential growth trend) that this could be done by 2025. Henry Markram, director of the Blue Brain project (which is attempting brain emulation), made a similar claim (predicting 2020) at the Oxford TED conference in 2009.