Honeypot

          The Internet is growing fast: by some estimates the number of websites doubles every 53 days, and the number of people using the Internet is also growing. Global communication is therefore becoming more important every day. At the same time, computer crime is also increasing. Countermeasures are developed to detect or prevent attacks, but most of these measures are based on known facts and known attack patterns. Countermeasures such as firewalls and network intrusion detection systems rely on prevention, detection and reaction mechanisms; but is there enough information about the enemy?

          As in the military, it is important to know who the enemy is, what kind of strategy he uses, what tools he utilizes and what he is aiming for. Gathering this kind of information is not easy, but it is important. By knowing attack strategies, countermeasures can be improved and vulnerabilities can be fixed. Gathering as much information as possible is one main goal of a honeypot. Generally, such information gathering should be done silently, without alerting the attacker. All the gathered information gives the defending side an advantage and can therefore be used on production systems to prevent attacks.

          A honeypot is primarily an instrument for information gathering and learning. Its purpose is not to serve as an ambush for catching blackhats in action and pressing charges against them. The focus lies on silently collecting as much information as possible about their attack patterns, the programs they use, the purpose of the attack and the blackhat community itself. All this information is used to learn more about blackhat proceedings and motives, as well as their technical knowledge and abilities. Information gathering is only the primary purpose, though; a honeypot has many other possible uses, such as diverting hackers from production systems or catching a hacker in the act. Honeypots are not, however, a perfect solution for solving or preventing computer crime.

          Honeypots are hard to maintain, and they need operators with good knowledge of operating systems and network security. In the right hands, a honeypot is an effective tool for information gathering. In the wrong, inexperienced hands, it can become just another infiltrated machine and an instrument for the blackhat community.
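The idea of silently recording an attacker's first moves can be sketched in a few lines. The following is a minimal, purely illustrative low-interaction honeypot: it pretends to be an FTP service, greets whoever connects with a fake banner, and logs the peer address and first command. The banner text, port handling and log format are all assumptions for the sketch; a real honeypot would emulate the protocol far more fully and be strictly isolated from production systems.

```python
import datetime
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, banner=b"220 FTP server ready\r\n",
                 max_conns=1, log=None):
    """Listen on a fake service port, greet clients with a banner, and quietly
    record whatever they send. Returns (bound_port, log, listener_thread)."""
    if log is None:
        log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))          # port 0 lets the OS pick a free port
    srv.listen(5)
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            conn.sendall(banner)                    # pretend to be a real service
            data = conn.recv(1024)                  # capture the first command
            log.append({
                "time": datetime.datetime.now().isoformat(),
                "peer": addr[0],
                "data": data.decode(errors="replace").strip(),
            })
            conn.close()
        srv.close()

    listener = threading.Thread(target=serve, daemon=True)
    listener.start()
    return bound_port, log, listener

# Demo: probe the honeypot once, the way a scanner or attacker would.
port, log, listener = run_honeypot()
probe = socket.create_connection(("127.0.0.1", port))
hello = probe.recv(64)                  # the fake banner
probe.sendall(b"USER admin\r\n")
probe.close()
listener.join(timeout=2)
```

The important design point survives even in this toy: the honeypot never advertises itself and never interferes with the client, so the attacker has no reason to suspect the "service" is recording everything.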



Blue Gene Technology

A Blue Gene/P supercomputer at Argonne National Laboratory
          In November 2001 IBM announced a partnership with Lawrence Livermore National Laboratory to build the Blue Gene/L (BG/L) supercomputer, a 65,536-node machine designed around embedded PowerPC processors. Through the use of system-on-a-chip integration coupled with a highly scalable cellular architecture, Blue Gene/L will deliver 180 or 360 Teraflops of peak computing power, depending on the utilization mode.

The block scheme of the Blue Gene/L ASIC, including dual PowerPC 440 cores.
          Blue Gene/L represents a new level of scalability for parallel systems. Whereas existing large-scale systems range in size from hundreds to a few thousand compute nodes, Blue Gene/L makes a jump of almost two orders of magnitude. Several techniques have been proposed for building such a powerful machine. Some of the designs call for extremely powerful (100 GFLOPS) processors based on superconducting technology. The class of designs that we focus on uses current and foreseeable CMOS technology. It is reasonably clear that such machines, in the near future at least, will require a departure from the architectures of the current parallel supercomputers, which use a few thousand commodity microprocessors. With the current technology, it would take around a million microprocessors to achieve a petaFLOPS performance.

          Clearly, power requirements and cost considerations alone preclude this option. The class of machines of interest to us uses a "processors-in-memory" design: the basic building block is a single chip that includes multiple processors as well as memory and interconnection routing logic. On such machines, the ratio of memory to processors will be substantially lower than the prevalent one. As the technology is assumed to be the current generation one, the number of processors will still have to be close to a million, but the number of chips will be much lower. Using such a design, petaFLOPS performance will be reached within the next 2-3 years, especially since IBM has announced the Blue Gene project aimed at building such a machine.
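The scaling arithmetic above can be checked with a quick back-of-the-envelope sketch. The per-core rate and cores-per-chip figure below are illustrative assumptions, not IBM specifications; they simply show why a processors-in-memory design needs far fewer chips than processors.

```python
# Back-of-the-envelope check of the scaling claim above. The per-core rate
# and cores-per-chip figure are illustrative assumptions, not IBM specs.
PETAFLOPS = 1e15
per_core_flops = 1e9                       # assume ~1 GFLOPS per embedded core
cores_needed = PETAFLOPS / per_core_flops
print(f"cores needed: {cores_needed:,.0f}")   # 1,000,000

# A processors-in-memory chip puts several cores, memory and routing logic
# on one die, so the chip count falls well below the core count.
cores_per_chip = 8                         # assumed
chips_needed = cores_needed / cores_per_chip
print(f"chips needed: {chips_needed:,.0f}")   # 125,000
```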


One Blue Gene/L node board

          The system software for Blue Gene/L is a combination of standard and custom solutions. The software architecture for the machine is divided into three functional entities arranged hierarchically: a computational core, a control infrastructure and a service infrastructure. The I/O nodes (part of the control infrastructure) execute a version of the Linux kernel and are the primary off-load engine for most system services. No user code executes directly on the I/O nodes.


Optical Computers

          Computers have become an indispensable part of life. We need computers everywhere, be it for work, research or any other field. As the use of computers in our day-to-day life increases, the computing resources we need also go up. For companies like Google and Microsoft, harnessing resources as and when they need them is not a problem. But when it comes to smaller enterprises, affordability becomes a huge factor. With large infrastructure come problems like machine failures, hard drive crashes, software bugs, etc. This can be a big headache for such a community. Optical computing offers a solution to this situation.

          An optical computer is a hypothetical device that uses visible light or infrared beams, rather than electric current, to perform digital computations. An electric current flows at only about 10 percent of the speed of light. By applying some of the advantages of visible and/or IR networks at the device and component scale, a computer can be developed that performs operations many times faster than a conventional electronic computer.
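The propagation-speed claim above can be made concrete with a rough calculation: the time for a signal to cross 30 cm of interconnect electrically (at roughly 10% of c, as stated) versus optically. The numbers are idealized; real interconnect delay also depends on switching, materials and many other factors.

```python
# Rough illustration of the propagation-speed claim above: time for a signal
# to cross 30 cm electrically (at ~10% of c, as stated) versus optically.
# Idealized numbers; real interconnect delay involves many other factors.
C = 3.0e8                    # speed of light in vacuum, m/s (rounded)
distance = 0.30              # metres, e.g. across a circuit board

t_electrical = distance / (0.10 * C)   # electric signal at ~10% of c
t_optical = distance / C               # light in free space

print(f"electrical: {t_electrical * 1e9:.1f} ns")   # ~10 ns
print(f"optical:    {t_optical * 1e9:.1f} ns")      # ~1 ns
```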


          Optical computing describes a new technological approach for constructing a computer's processors and other components. Instead of the current approach of electrically transmitting data along tiny wires etched onto silicon, optical computing employs a technology called silicon photonics that uses laser light instead. This use of optical lasers overcomes the constraints associated with heat dissipation in today's components and allows much more information to be stored and transmitted in the same amount of space.

          Optical computing means performing computations, operations, storage and transmission of data using light. Optical technology promises massive upgrades in the efficiency and speed of computers, as well as significant shrinkage in their size and cost. An optical desktop computer could be capable of processing data up to 100,000 times faster than current models.


Surface Computing


          The name Surface comes from "surface computing," and Microsoft envisions the coffee-table machine as the first of many such devices. Surface computing uses a blend of wireless protocols, special machine-readable tags and shape recognition to seamlessly merge the real and the virtual world — an idea the Milan team refers to as "blended reality." The table can be built with a variety of wireless transceivers, including Bluetooth, Wi-Fi and (eventually) radio frequency identification (RFID) and is designed to sync instantly with any device that touches its surface.

          It supports multiple touch points – Microsoft says "dozens and dozens" – as well as multiple users simultaneously, so more than one person could be using it at once, or one person could be doing multiple tasks.


          The term "surface" describes how it's used. There is no keyboard or mouse. All interactions with the computer are done by touching the surface of the computer's screen with hands or brushes, or via wireless interaction with devices such as smartphones, digital cameras or Microsoft's Zune music player. Thanks to its cameras, the device can also recognize physical objects, such as credit cards or hotel "loyalty" cards.

          For instance, a user could set a digital camera down on the tabletop and wirelessly transfer pictures into folders on Surface's hard drive. Setting a music player down would let a user drag songs from a home music collection directly into the player, or between two players, using a finger. A user could even transfer mapping information for a restaurant reserved through a Surface tabletop over to a smartphone just before walking out the door.


Fog Screen

          Fog Screen is a breakthrough technology that allows projection of high-quality images in the air. It is currently the only walk-through projection screen. You can literally use the air as your user interface, touching nothing but the air with your bare hands. The screen is created by a suspended fog-generating device with no frame around it, and works with ordinary video projectors. The fog used is dry, so it does not make you wet even if you stay under the Fog Screen device for a long time. The fog is made of ordinary water with no chemicals whatsoever. With two projectors, you can project different images onto each side of the screen. It is a display device and an application of computer graphics.

  • Inspired by science fiction movies such as Star Wars, two Finnish virtual reality researchers created the Fog Screen to recreate some of the effects from those movies in real life.
  • Fog Screen is an exciting new projection technology that projects images and video onto a screen of "dry" fog, creating the illusion that the images are floating in midair.
  • Fog Screen is the world's first immaterial walk-through projection screen. Its qualities, in particular the walk-through capability, set Fog Screen apart from other displays and have created a successful market for its products.
  • The Fog Screen is an innovative display technology that allows projection onto a thin layer of dry fog. Imagine the traditional pull-down screen found in many classrooms today: instead of a screen being pulled down from the ceiling, fog is pushed down and held in place by several small fans, providing a consistent surface for display.



Virtual keyboard - VKB

          A virtual keyboard is a key-in device, roughly the size of a fountain pen, which uses highly advanced laser technology to project a full-sized keyboard onto a flat surface. Since the invention of computers, they have undergone rapid miniaturization. Disks and components grew smaller, but one component remained the same for decades: the keyboard. Since miniaturizing a traditional keyboard is very difficult, we go for a virtual keyboard. Here, a camera tracks the finger movements of the typist to detect the correct keystroke. A virtual keyboard is a keyboard that a user operates by typing on or within a wireless or optically detectable surface or area rather than by depressing physical keys.


          Since their invention, computers have undergone rapid miniaturization, from 'space saver' to 'as tiny as your palm'. Disks and components grew smaller, but one component remained the same for decades: the keyboard. Miniaturizing the keyboard has proved a nightmare for users. Users of PDAs and smartphones are annoyed by the tiny size of the keys. The new Virtual Keyboard uses advanced technologies to project a full-sized computing keyboard onto any surface. It has become the solution for mobile computer users who prefer touch-typing to cramping over tiny keys. Typing information into mobile devices usually feels about as natural as a linebacker riding a Big Wheel. The Virtual Keyboard is a way to eliminate finger cramping. All that's needed to use the keyboard is a flat surface. Using laser technology, a bright red image of a keyboard is projected from a device such as a handheld. Detection technology based on optical recognition allows users to tap the images of the keys, so the virtual keyboard behaves like a real one. It's designed to support any typing speed.


          A virtual keyboard is a keyboard that a user operates by typing (moving fingers) on or within a wireless or optical-detectable surface or area rather than by depressing physical keys. In one technology, the keyboard is projected optically on a flat surface and, as the user touches the image of a key, the optical device detects the stroke and sends it to the computer. In another technology, the keyboard is projected on an area and selected keys are transmitted as wireless signals using the short-range Bluetooth technology. With either approach, a virtual keyboard makes it possible for the user of a very small smart phone or a wearable computer to have full keyboard capability.

          Theoretically, with either approach, the keyboard can be in space and the user can type by moving fingers through the air! The regular QWERTY keyboard layout is provided. All that's needed to use the keyboard is a flat surface. Using laser technology, a bright red image of a keyboard is projected from a device such as a handheld. Detection technology based on optical recognition allows users to tap the images of the keys so the virtual keyboard behaves like a real one. It's designed to support any typing speed. Several products have been developed that use virtual keyboard to mean a keyboard that has been put on a display screen as an image map. In some cases, the keyboard can be customized. Depending on the product, the user (who may be someone unable to use a regular keyboard) can use a touch screen or a mouse to select the keys.
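The detection step described above reduces, in software, to mapping a sensed fingertip position on the projected image to a key label. The sketch below shows that mapping for a staggered QWERTY layout; the key sizes and row offsets are invented purely for illustration, not taken from any real product.

```python
# A sketch of the detection step described above: the optical sensor reports
# an (x, y) fingertip position on the projected image, and software maps it
# to a key. The layout geometry here is invented purely for illustration.
KEY_W, KEY_H = 40, 40        # projected key size in sensor pixels (assumed)
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ROW_OFFSETS = [0, 20, 40]    # horizontal stagger per row, like a real keyboard

def key_at(x, y):
    """Return the key label under a detected fingertip, or None if it missed."""
    row = y // KEY_H
    if not 0 <= row < len(ROWS):
        return None
    col = (x - ROW_OFFSETS[row]) // KEY_W
    if not 0 <= col < len(ROWS[row]):
        return None
    return ROWS[row][col]

print(key_at(5, 5))      # a tap near the top-left lands on "q"
print(key_at(25, 45))    # second row, first key: "a"
```

A real device adds a second stage before this lookup: deciding from the optical signal that a "press" happened at all (the finger broke the laser plane), which is where most of the engineering effort goes.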


Hacking

          The Internet, like any other new medium historically, provides new methods of engaging in illegal activities. That is not to say that the Internet is intrinsically 'bad', as many tabloid journalists would have us believe; it is simply a means for human beings to express themselves and share common interests. Unfortunately, many of these common interests include pornography, trading Warez (pirated software), trading illegal MP3 files, and engaging in all kinds of fraud such as credit card fraud. Hacking, on the other hand, is an activity greatly misrepresented by the wider media and Hollywood movies. Although many hackers go on from being computer enthusiasts to Warez pirates, many also become system administrators, security consultants or website managers.

  • Hacking generally refers to the act of a person abusing computer access, breaking into computers, or using computers without authorization.
  • An Attack is the attempt of an individual or group to violate a system through some series of events. The attack can originate from someone inside or outside the network.
  • An Intruder or Attacker is a person who carries out an attack.
       
          Hacker is a term used to describe different types of computer experts. It is also sometimes extended to mean any kind of expert, especially with the connotation of having particularly detailed knowledge or of cleverly circumventing limits. The meaning of the term, when used in a computer context, has changed somewhat over the decades since it first came into use, as it has been given additional and clashing meanings by new users of the word.

          Currently, "hacker" is used in two main ways, one positive and one pejorative. In the computing community it can describe a particularly brilliant programmer or technical expert. This is said by some to be the "correct" usage of the word (see the Jargon File definition). In popular usage and in the media, however, it generally describes computer intruders or criminals. "Hacker" can thus be seen as a shibboleth, identifying those who use it in its positive sense as members of the computing community.


3D Searching


          The 3D-search system uses algorithms to convert the selected or drawn image-based query into a mathematical model that describes the features of the object being sought. This converts drawings and objects into a form that computers can work with. The search system then compares the mathematical description of the drawn or selected object to those of 3D objects stored in a database, looking for similarities in the described features.
          The key to the way computer programs look for 3D objects is the voxel (volume pixel). A voxel is a set of graphical data - such as position, color, and density - that defines the smallest cube-shaped building block of a 3D image. Computers can display 3D images only in two dimensions. To do this, 3D rendering software takes an object and slices it into 2D cross sections. The cross sections consist of pixels (picture elements), which are single points in a 2D image. To render the 3D image on a 2D screen, the computer determines how to display the 2D cross sections stacked on top of each other, using the applicable interpixel and interslice distances to position them properly. The computer interpolates data to fill in interslice gaps and create a solid image.
          True 3D search systems offer two principal ways to formulate a query: users can select objects from a catalog of images based on product groupings, such as gears or sofas, or they can use a drawing program to create a picture of the object they are looking for. For example, Princeton's 3D search engine uses an application to let users draw a 2D or 3D representation of the object they want to find.
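The comparison step can be sketched very simply: voxelize two 3D point sets into occupancy grids and score their overlap. Real 3D search engines use much richer, rotation-invariant shape descriptors; the functions below only illustrate the voxel idea from the paragraphs above, with an invented grid size and sample points.

```python
# Minimal sketch of the comparison step above: voxelize two 3D point sets
# into occupancy grids and score their overlap. Real systems use richer,
# rotation-invariant shape descriptors; this only illustrates the idea.
def voxelize(points, grid=8):
    """Map points in the unit cube [0, 1)^3 to a set of occupied voxel indices."""
    return {(int(x * grid), int(y * grid), int(z * grid)) for x, y, z in points}

def similarity(a, b):
    """Jaccard overlap of two occupancy sets: 1.0 = identical, 0.0 = disjoint."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Invented sample model: three points near corners of the unit cube.
corner_model = [(0.1, 0.1, 0.1), (0.9, 0.1, 0.1), (0.1, 0.9, 0.9)]
print(similarity(voxelize(corner_model), voxelize(corner_model)))   # 1.0
```

Ranking database objects by this score against the voxelized query is then a straightforward nearest-neighbour search.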


CAPTCHA

          You're trying to sign up for a free email service offered by Gmail or Yahoo. Before you can submit your application, you first have to pass a test. It's not a hard test -- in fact, that's the point. For you, the test should be simple and straightforward. But for a computer, the test should be almost impossible to solve.

          This sort of test is a CAPTCHA. They're also known as a type of Human Interaction Proof (HIP). You've probably seen CAPTCHA tests on lots of Web sites. The most common form of CAPTCHA is an image of several distorted letters. It's your job to type the correct series of letters into a form. If your letters match the ones in the distorted image, you pass the test.

          

          CAPTCHA is short for Completely Automated Public Turing test to tell Computers and Humans Apart. The term "CAPTCHA" was coined in 2000 by Luis von Ahn, Manuel Blum and Nicholas J. Hopper (all of Carnegie Mellon University) and John Langford (then of IBM). CAPTCHAs are challenge-response tests to ensure that users are indeed human. The purpose of a CAPTCHA is to block form submissions from spam bots - automated scripts that harvest email addresses from publicly available web forms. A common kind of CAPTCHA used on most websites requires users to enter a string of characters that appears in a distorted form on the screen.

          
          CAPTCHAs are used because of the fact that it is difficult for the computers to extract the text from such a distorted image, whereas it is relatively easy for a human to understand the text hidden behind the distortions. Therefore, the correct response to a CAPTCHA challenge is assumed to come from a human and the user is permitted into the website.
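The server-side bookkeeping behind this flow is simple, as the sketch below shows; the genuinely hard part for machines, rendering the challenge as a distorted image, is only noted in a comment. The alphabet, challenge length and matching rule here are illustrative choices, not those of any particular CAPTCHA service.

```python
# Server-side skeleton of the challenge-response flow described above.
# Rendering the challenge as a distorted image (the part that is hard for
# machines to reverse) is only noted in a comment; the bookkeeping is easy.
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits   # illustrative choice

def new_challenge(length=6):
    """Pick a random string; a real system would render it as a warped image."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def check_response(challenge, response):
    """Accept the form submission only if the typed text matches the challenge."""
    return response.strip().upper() == challenge

challenge = new_challenge()
# ...render `challenge` as a distorted image and send it to the user...
print(check_response(challenge, challenge.lower()))   # a human who read it: True
print(check_response(challenge, "GUESS!"))            # a bot guessing blindly: False
```

Note the use of `secrets` rather than `random`: challenge strings must be unpredictable, or a bot could simply guess them without ever reading the image.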