Li-Fi Technology

In simple terms, Li-Fi can be thought of as light-based Wi-Fi: it uses light instead of radio waves to transmit information. Instead of Wi-Fi modems, Li-Fi would use transceiver-fitted LED lamps that can light a room as well as transmit and receive information. Since simple light bulbs serve as the access points, there can technically be any number of them.

This technology uses a part of the electromagnetic spectrum that is still largely untapped: the visible spectrum. Light has been part of our lives for millions of years and has no known major ill effects. Moreover, the visible spectrum offers roughly 10,000 times more bandwidth than the radio spectrum, and the bulbs already in use worldwide multiply that into roughly 10,000 times more availability as an infrastructure, globally.

Data can be encoded in the light by varying the rate at which the LEDs flicker on and off, producing different strings of 1s and 0s. The LED intensity is modulated so rapidly that the human eye cannot notice it, so the output appears constant.
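
To make the idea concrete, here is a minimal sketch of such on-off keying in Python, where each bit of a message maps to one LED state; the function names and framing are illustrative, not part of any Li-Fi standard:

    # Minimal on-off keying (OOK) sketch: each bit becomes one LED
    # symbol ("on" for 1, "off" for 0). Real Li-Fi links switch far
    # faster than the eye can follow, so the lamp appears constantly lit.

    def encode_ook(data: bytes) -> list[int]:
        """Turn a byte string into a flat list of 0/1 LED states."""
        bits = []
        for byte in data:
            for i in range(7, -1, -1):      # most significant bit first
                bits.append((byte >> i) & 1)
        return bits

    def decode_ook(bits: list[int]) -> bytes:
        """Reassemble LED states back into bytes (inverse of encode_ook)."""
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = 0
            for bit in bits[i:i + 8]:
                byte = (byte << 1) | bit
            out.append(byte)
        return bytes(out)

    signal = encode_ook(b"Li-Fi")           # list of LED on/off states
    assert decode_ook(signal) == b"Li-Fi"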

More sophisticated techniques could dramatically increase VLC data rates. Teams at the University of Oxford and the University of Edinburgh are focusing on parallel data transmission using arrays of LEDs, where each LED transmits a different data stream. Other groups are using mixtures of red, green and blue LEDs to alter the light's frequency, with each frequency encoding a different data channel.
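
As a toy illustration of the parallel approach, the sketch below stripes a byte stream round-robin across three channels, standing in for red, green and blue LEDs, each carrying its own sub-stream. The channel count and the striping rule are illustrative assumptions, not the actual Oxford or Edinburgh scheme:

    # Toy illustration of parallel VLC channels: stripe a byte stream
    # round-robin across n LEDs (n = 3 standing in for red/green/blue),
    # so each LED carries its own independent sub-stream.

    def stripe(data: bytes, n_channels: int) -> list[bytes]:
        """Split data round-robin into one sub-stream per LED channel."""
        return [data[i::n_channels] for i in range(n_channels)]

    def unstripe(channels: list[bytes]) -> bytes:
        """Interleave the per-LED sub-streams back into the original data."""
        out = bytearray(sum(len(c) for c in channels))
        for i, chan in enumerate(channels):
            out[i::len(channels)] = chan
        return bytes(out)

    rgb = stripe(b"parallel light", 3)      # one sub-stream per colour
    assert unstripe(rgb) == b"parallel light"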


Li-Fi, as it has been dubbed, has already achieved blisteringly high speeds in the lab. Researchers at the Heinrich Hertz Institute in Berlin, Germany, have reached data rates of over 500 megabits per second using a standard white-light LED. Harald Haas of the University of Edinburgh has set up a spin-off firm to sell a consumer VLC transmitter that is due for launch next year. It is capable of transmitting data at 100 Mbit/s - faster than most UK broadband connections.




Honeypot

          The Internet is growing fast, doubling its number of websites every 53 days, and the number of people using it is growing as well. Global communication is therefore becoming more important every day. At the same time, computer crime is also increasing. Countermeasures are developed to detect or prevent attacks, but most of these measures are based on known facts and known attack patterns. Countermeasures such as firewalls and network intrusion detection systems are built on prevention, detection and reaction mechanisms - but is there enough information about the enemy?

          As in the military, it is important to know who the enemy is, what strategy he uses, what tools he utilizes and what he is aiming for. Gathering this kind of information is not easy, but it is important: by knowing attack strategies, countermeasures can be improved and vulnerabilities can be fixed. Gathering as much information as possible is one main goal of a honeypot. Generally, such information gathering should be done silently, without alarming the attacker. All the gathered information gives the defending side an advantage and can therefore be used on production systems to prevent attacks.

          A honeypot is primarily an instrument for information gathering and learning. Its purpose is not to be an ambush that catches the blackhat community in action and presses charges against them. The focus lies on silently collecting as much information as possible about their attack patterns, the programs they use, the purpose of their attacks and the blackhat community itself. All this information is used to learn more about blackhat proceedings and motives, as well as their technical knowledge and abilities. Information gathering is only the primary purpose, however; a honeypot can also divert hackers from production systems or catch a hacker while conducting an attack, to name just two other possibilities. Honeypots are not a perfect solution for solving or preventing computer crimes.
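
          As a concrete illustration of this silent-collection role, the following is a minimal low-interaction honeypot sketch: a TCP listener on an otherwise unused port that records each connection's source address and first bytes without ever responding. The port number, log file name and buffer size are illustrative choices only; real honeypots are considerably more elaborate.

    # Minimal low-interaction honeypot sketch: listen on an unused port,
    # log each connection's source address and first bytes, and never
    # reply. Port 2222 and the log file name are illustrative only.

    import datetime
    import socket

    def run_honeypot(port: int = 2222, logfile: str = "honeypot.log") -> None:
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(5)
        while True:
            conn, (ip, src_port) = srv.accept()
            conn.settimeout(5.0)            # don't let a probe hang us
            try:
                payload = conn.recv(1024)   # capture the first bytes sent
            except socket.timeout:
                payload = b""
            finally:
                conn.close()                # stay silent: no banner, no reply
            stamp = datetime.datetime.now().isoformat()
            with open(logfile, "a") as log:
                log.write(f"{stamp} {ip}:{src_port} {payload!r}\n")

    if __name__ == "__main__":
        run_honeypot()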

          Honeypots are hard to maintain, and they need operators with good knowledge of operating systems and network security. In the right hands, a honeypot is an effective tool for information gathering. In the wrong, inexperienced hands, it can become yet another infiltrated machine and an instrument for the blackhat community.



Blue Gene Technology

[Figure: A Blue Gene/P supercomputer at Argonne National Laboratory]
          In November 2001 IBM announced a partnership with Lawrence Livermore National Laboratory to build the Blue Gene/L (BG/L) supercomputer, a 65,536-node machine designed around embedded PowerPC processors. Through the use of system-on-a-chip integration coupled with a highly scalable cellular architecture, Blue Gene/L will deliver 180 or 360 teraFLOPS of peak computing power, depending on the utilization mode.
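
          The two peak figures can be sanity-checked with simple arithmetic. The sketch below assumes the 700 MHz PowerPC 440 clock and four floating-point operations per cycle per core (figures not stated in the text above), and distinguishes the mode where both cores of a node compute from the mode where one core is dedicated to communication:

    # Back-of-the-envelope check on the 180/360 teraFLOPS peak figures.
    # Assumed here (not stated above): 700 MHz PowerPC 440 cores, each
    # issuing 4 floating-point operations per cycle on its dual FPU.

    NODES = 65_536
    PER_CORE = 700e6 * 4                    # 2.8 GFLOPS per core

    both_cores = NODES * 2 * PER_CORE       # both cores compute
    one_core = NODES * 1 * PER_CORE         # one core computes, the
                                            # other handles communication
    print(f"{both_cores / 1e12:.0f} TFLOPS")  # ~367, quoted as 360
    print(f"{one_core / 1e12:.0f} TFLOPS")    # ~183, quoted as 180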

[Figure: Block scheme of the Blue Gene/L ASIC, including the dual PowerPC 440 cores]
          Blue Gene/L represents a new level of scalability for parallel systems. Whereas existing large-scale systems range in size from hundreds to a few thousand compute nodes, Blue Gene/L makes a jump of almost two orders of magnitude. Several techniques have been proposed for building such a powerful machine. Some of the designs call for extremely powerful (100 GFLOPS) processors based on superconducting technology. The class of designs we focus on uses current and foreseeable CMOS technology. It is reasonably clear that such machines, in the near future at least, will require a departure from the architectures of current parallel supercomputers, which use a few thousand commodity microprocessors. With current technology, it would take around a million microprocessors to achieve petaFLOPS performance.

          Clearly, power requirements and cost considerations alone preclude this option. The class of machines of interest to us uses a "processors-in-memory" design: the basic building block is a single chip that includes multiple processors as well as memory and interconnection routing logic. On such machines, the memory-to-processor ratio will be substantially lower than the prevalent one. As the technology is assumed to be the current generation, the number of processors will still have to be close to a million, but the number of chips will be much lower. Using such a design, petaFLOPS performance will be reached within the next 2-3 years, especially since IBM has announced the Blue Gene project aimed at building such a machine.
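
          A rough estimate shows where the million-processor figure and the processors-in-memory payoff come from; the 1 GFLOPS per-processor rate and the 32-processors-per-chip count below are illustrative assumptions, not published Blue Gene parameters:

    # Rough estimate behind the million-processor figure and the
    # processors-in-memory payoff. The 1 GFLOPS per-processor rate and
    # the 32 processors-per-chip count are illustrative assumptions.

    TARGET = 1e15                           # one petaFLOPS
    PER_PROCESSOR = 1e9                     # ~1 GFLOPS per commodity core

    processors = TARGET / PER_PROCESSOR
    print(f"processors needed: {processors:,.0f}")       # ~1,000,000

    for per_chip in (1, 32):                # single-core chips vs.
        chips = processors / per_chip       # processors-in-memory chips
        print(f"{per_chip:>2} per chip -> {chips:,.0f} chips")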


[Figure: One Blue Gene/L node board]

          The system software for Blue Gene/L is a combination of standard and custom solutions. The software architecture for the machine is divided into three functional entities arranged hierarchically: a computational core, a control infrastructure and a service infrastructure. The I/O nodes (part of the control infrastructure) execute a version of the Linux kernel and are the primary off-load engine for most system services. No user code executes directly on the I/O nodes.