Thursday, 31 December 2015

HTAM

Abstract

Intel’s Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture. Hyper-Threading Technology makes a single physical processor appear as two logical processors; the physical execution resources are shared and the architecture state is duplicated for the two logical processors.

  • From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on multiple physical processors. 
  • From a microarchitecture perspective, this means that instructions from both logical processors will persist and execute simultaneously on shared execution resources.

Hyper-Threading Technology makes a single physical processor appear as multiple logical processors [11, 12]. To do this, there is one copy of the architecture state for each logical processor, and the logical processors share a single set of physical execution resources. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on conventional physical processors in a multiprocessor system. From a microarchitecture perspective, this means that instructions from logical processors will persist and execute simultaneously on shared execution resources.
The first implementation of Hyper-Threading Technology is being made available on the Intel Xeon processor family for dual and multiprocessor servers, with two logical processors per physical processor. By more efficiently using existing processor resources, the Intel Xeon processor family can significantly improve performance at virtually the same system cost. This implementation of Hyper-Threading Technology added less than 5% to the relative chip size and maximum power requirements, but can provide performance benefits much greater than that.
Each logical processor maintains a complete set of the architecture state. The architecture state consists of registers including the general-purpose registers, the control registers, the advanced programmable interrupt controller (APIC) registers, and some machine state registers. From a software perspective, once the architecture state is duplicated, the processor appears to be two processors.
The number of transistors required to store the architecture state is an extremely small fraction of the total. Logical processors share nearly all other resources on the physical processor, such as caches, execution units, branch predictors, control logic, and buses. Each logical processor has its own interrupt controller, or APIC. Interrupts sent to a specific logical processor are handled only by that logical processor.
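As a quick illustration of the software view, the short Python sketch below (an assumption of this write-up, not Intel sample code; it relies on the third-party psutil package) reports how many logical processors the operating system sees versus the number of physical cores.

import os
import psutil  # third-party package, assumed to be installed

# Logical processors as the OS schedules them (includes Hyper-Threading siblings).
logical = os.cpu_count()
# Physical cores only, for comparison.
physical = psutil.cpu_count(logical=False)

print("Logical processors:", logical)
print("Physical cores:    ", physical)
if logical and physical and logical > physical:
    print("Hyper-Threading (or another SMT scheme) appears to be enabled.")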

BENEFITS OF HYPER-THREADING TECHNOLOGY

High processor utilization rates: One processor with two architectural states enables the processor to utilize execution resources more efficiently. Because the two threads share one set of execution resources, the second thread can use resources that would otherwise be idle if only one thread were executing. The result is increased utilization of the execution resources within each physical processor package.
Higher performance for properly optimized software: Greater throughput is achieved when software is multithreaded in a way that allows different threads to tap different processor resources in parallel. For example, integer operations can be scheduled on one logical processor while floating-point computations occur on the other, as the sketch after these benefits illustrates.
Full backward compatibility: Virtually all multiprocessor-aware operating systems and multithreaded applications benefit from Hyper-Threading Technology. Software that lacks multiprocessor capability is unaffected by Hyper-Threading Technology.
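The sketch below (illustrative Python, not Intel reference code; the workload sizes are arbitrary) shows this scheduling idea: one integer-heavy worker and one floating-point-heavy worker are handed to the operating system, which places them on logical processors just as it would on physical ones.

import math
from concurrent.futures import ProcessPoolExecutor

def integer_work(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i          # integer ALU work
    return total

def float_work(n: int) -> float:
    total = 0.0
    for i in range(1, n):
        total += math.sqrt(i)   # floating-point work
    return total

if __name__ == "__main__":
    # The OS is free to place these two workers on separate logical processors.
    with ProcessPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(integer_work, 2_000_000)
        f2 = pool.submit(float_work, 2_000_000)
        print("integer result:", f1.result())
        print("float result:  ", f2.result())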

Haptic Technology

Abstract

The basic idea of haptic devices is to provide users with force feedback on the motion and/or force that they generate. Haptic devices are useful for tasks where visual information alone is not sufficient and may induce unacceptable manipulation errors, for example surgery or teleoperation in radioactive or chemical environments.
The aim of haptic devices is to provide the user with a feeling of the situation. In this article we will try to review a particular type of haptic devices, namely those based on parallel mechanisms.
Haptic technology is a force or tactile feedback technology that allows a user to touch, feel, manipulate, create, and/or alter simulated three-dimensional objects in a virtual environment. Such an interface could be used to train physical skills for jobs requiring specialized hand-held tools, for instance by surgeons, astronauts, and mechanics. It could also enable modeling of three-dimensional objects without a physical medium, such as automobile body designers working with clay models, or mocking up developmental prototypes directly from CAD databases rather than in a machine shop, using a virtual reality modeling language in conjunction with haptic technology. In addition, haptics can help doctors locate a change in temperature or a tumor in a certain part of the body without physically being there.
The term haptic is derived from the Greek word 'haphe', which means pertaining to touch. The scientific term "haptics" refers to sensing and manipulation through the sense of touch. Although the word haptics may be new to many users, chances are that they are already using haptic interfaces.

Applications Of Haptic Technology

Haptic technology finds a wide range of applications, as mentioned below:

• Surgical simulation and medical training.
• Physical rehabilitation.
• Training and education.
• Museum displays.
• Painting, sculpting and CAD.
• Scientific visualization.
• Military applications.

Haptic interface

During the spring of 1993, MIT's work on haptics introduced a new haptic interface that came to be called the PHANTOM. It was quickly commercialized due to strong interest from many colleagues and technically progressive corporations. Right now there are hundreds of PHANTOM haptic interfaces in use worldwide, which could represent an emerging market for haptic interface devices. The PHANTOM interface is an electromechanical device small enough to sit on the surface of a desk, and it connects to a computer's input/output port.
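The control loop below is a simplified Python sketch of how a desktop haptic interface such as the PHANTOM renders force feedback; the device functions read_stylus_position() and send_force() are hypothetical placeholders, not the actual PHANTOM API, and the wall stiffness is an arbitrary illustrative value.

import time

STIFFNESS = 800.0   # N/m, spring constant of the virtual wall (illustrative value)
RATE_HZ = 1000      # haptic loops typically run near 1 kHz for stable feedback

def read_stylus_position() -> float:
    """Placeholder: return the stylus x-position in metres from the device driver."""
    raise NotImplementedError

def send_force(force_newtons: float) -> None:
    """Placeholder: command the device actuators."""
    raise NotImplementedError

def haptic_loop() -> None:
    period = 1.0 / RATE_HZ
    while True:
        x = read_stylus_position()
        # Inside the virtual wall (x < 0): push back proportionally to penetration depth.
        force = -STIFFNESS * x if x < 0 else 0.0
        send_force(force)
        time.sleep(period)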

Diamond chip

Abstract

Electronics without silicon is unbelievable, but it will come true with the evolution of the diamond, or carbon, chip. Nowadays silicon is used for the manufacture of electronic chips. It has many disadvantages when used in power electronic applications, such as bulky size and slow operating speed. Carbon, silicon and germanium belong to the same group in the periodic table.
They have four valence electrons in their outer shell. Pure silicon and germanium are semiconductors at normal temperature, so in the earlier days both were used widely for the manufacture of electronic components. But it was later found that germanium has many disadvantages compared to silicon, such as a larger reverse current and less stability with temperature, so the industry focused on developing electronic components using silicon wafers.
Researchers have now found that carbon is more advantageous than silicon. By using carbon as the manufacturing material, we can achieve smaller, faster and stronger chips, and smaller prototypes of the carbon chip have already been made. A major carbon component has also been developed, the carbon nanotube (CNT), which has been demonstrated in experimental transistors and will be a major component of the diamond chip.

WHAT IS IT?

In a single definition, a diamond chip or carbon chip is an electronic chip manufactured on a diamond-structured carbon wafer. It can also be defined as an electronic component manufactured using carbon as the wafer. The major carbon component is the carbon nanotube (CNT), a nano-dimensional structure made of carbon with many unique properties.


HOW IS IT POSSIBLE?

Pure diamond-structured carbon is non-conducting in nature. In order to make it conducting, we have to perform a doping process. Boron is used as the p-type doping agent and nitrogen as the n-type doping agent. The doping process is similar to that used in silicon chip manufacturing, but it takes more time than for silicon because it is very difficult to diffuse dopants through the strongly bonded diamond structure. The carbon nanotube (CNT) is already a semiconductor.

ADVANTAGES OF DIAMOND CHIP

1 SMALLER COMPONENTS ARE POSSIBLE

As the carbon atom is smaller than the silicon atom, it is possible to etch much finer lines through diamond-structured carbon. We can realize a transistor whose size is one-hundredth that of a silicon transistor.

2 IT WORKS AT HIGHER TEMPERATURE

Diamond is a very strongly bonded material. It can withstand higher temperatures than silicon: at very high temperatures the crystal structure of silicon will collapse, but a diamond chip can function well at these elevated temperatures. Diamond is also a very good conductor of heat, so any heat dissipated inside the chip is very quickly transferred to the heat sink or other cooling mechanism.

3 FASTER THAN SILICON CHIP

A carbon chip works faster than a silicon chip. The mobility of electrons inside doped diamond-structured carbon is higher than in the silicon structure. Because the silicon atom is larger than the carbon atom, the chance of electrons colliding with the larger silicon atoms increases; with the smaller carbon atom, the chance of collision decreases. So the mobility of the charge carriers is higher in doped diamond-structured carbon than in silicon.

4 LARGER POWER HANDLING CAPACITY

Silicon is used for power electronics applications, but it has many disadvantages, such as bulky size, slow operating speed, lower efficiency and a lower band gap, and at very high voltages the silicon structure will collapse. Diamond has a strongly bonded crystal structure, so a carbon chip can work in high-power environments. It is estimated that a carbon transistor could deliver one watt of power at a rate of 100 GHz. Nowadays, in all power electronic circuits, we use components such as relays or MOSFET interconnection circuits (inverter circuits) to interconnect a low-power control circuit with a high-power circuit. If a carbon chip is used, this interface is not needed: the high-power circuit can be connected directly to the diamond chip.

NEW TECH: (CONSTRUCTIVE DESTRUCTION) BY IBM:

Constructive destruction is a new technique invented by IBM. It is not a method of manufacturing carbon nanotubes but a method of manipulating them. The three manufacturing methods mentioned above yield a mixture of single-walled and multi-walled carbon nanotubes, while electronic applications require semiconducting carbon nanotubes. This method separates semiconducting carbon nanotubes from metallic carbon nanotubes. It follows the steps below:
1. Deposit bundles of stuck-together carbon nanotubes on a silicon dioxide wafer.
2. Use lithographic methods to make electrodes on both ends of the carbon nanotubes.
3. Make a gate electrode on the silicon dioxide wafer.
4. Using the silicon dioxide wafer itself as the gate electrode, the scientists switched off the semiconducting nanotubes by applying an appropriate voltage to the wafer, which blocks any current flow through the semiconducting nanotubes.
5. The metallic carbon nanotubes are left unprotected, and an appropriate voltage is applied to destroy them.
6. Result: a dense array of unharmed semiconducting carbon nanotubes is formed, which can be used for transistor manufacturing.

Atomic Force Microscope (AFM)

"Atomic force microscopy (AFM) is a method of measuring surface topography on a scale from angstroms to 100 microns. The technique involves imaging a sample through the use of probe or tip, with a radius of 20nm. The tip is held several nanometers above the surface using a feedback mechanism that measures surface-tip interaction on the scale of nano Newton’s. Variation in tip height are recorded while the tip is scanned repeatedly"
Across the sample, producing a topographic image of the surface. In addition to basic AFM, the instrument in the Microscopy Suite is capable of producing images in a number of other modes, including tapping, magnetic force, and pulsed force. In tapping mode, the tip is oscillated above the sample surface, and data may be collected from interactions with surface topography, stiffness, and adhesion.
This result in an expanded number of image contrast methods compared to basic AFM. Magnetic force mode imaging utilizing a magnetic tip to enable the visualization of magnetic domains on the sample. In electrical force mode imaging a charged tip is used to locate and record variations in surface charge. In pulsed force mode (Witec), the sample is oscillated beneath the tip, and a serious of pseudo force – distance curves are generated. This permits the separation of sample topography, stiffness, and adhesion values, producing three independent images. Or three individual sets of data , Simultaneously
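The Python sketch below is an illustrative model of the feedback mechanism described above, not the microscope's actual control software: a small proportional-integral loop adjusts the tip height to keep the measured tip-sample interaction at a constant setpoint, and the applied height corrections trace out the topography. The functions measure_deflection() and move_z() are hypothetical stand-ins for the instrument interface.

def measure_deflection(x: float, z: float) -> float:
    """Placeholder for the photodiode reading of cantilever deflection at (x, z)."""
    raise NotImplementedError

def move_z(z: float) -> None:
    """Placeholder for the z-piezo actuator command."""
    raise NotImplementedError

def afm_feedback_scan(x_positions, setpoint, kp=0.5, ki=0.05):
    """Return a topography profile: the z correction applied at each x position."""
    z = 0.0
    integral = 0.0
    topography = []
    for x in x_positions:
        deflection = measure_deflection(x, z)   # simulated/real sensor read
        error = setpoint - deflection
        integral += error
        z += kp * error + ki * integral         # adjust tip height to cancel the error
        move_z(z)                               # command the actuator
        topography.append(z)                    # the recorded z maps the surface height
    return topography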

3D Searching

Abstract

From computer-aided design (CAD) drawings of complex engineering parts to digital representations of proteins and complex molecules, an increasing amount of 3D information is making its way onto the Web and into corporate databases. Because of this, users need ways to store, index, and search this information. Typical Web-searching approaches, such as Google's, can't do this. Even for 2D images, they generally search only the textual parts of a file, noted Greg Notess, editor of the online Search Engine Showdown newsletter.
However, researchers at universities such as Purdue and Princeton have begun developing search engines that can mine catalogs of 3D objects, such as airplane parts, by looking for physical, not textual, attributes. Users formulate a query by using a drawing application to sketch what they are looking for or by selecting a similar object from a catalog of images. The search engine then finds the items they want. Without such tools, if an existing part cannot be found, the company must make it again, wasting valuable time and money.

3D SEARCHING Introduction

Advances in computing power combined with interactive modeling software, which lets users create images as queries for searches, have made 3D search technology possible.
The methodology used involves the following steps:
  • Query formulation
  • Search process
  • Search result

QUERY FORMULATION

True 3D search systems offer two principal ways to formulate a query: users can select objects from a catalog of images based on product groupings, such as gears or sofas, or they can use a drawing program to create a picture of the object they are looking for. For example, Princeton's 3D search engine uses an application that lets users draw a 2D or 3D sketch of the object they want to find.
The picture shows the query interface of a 3D search system.

SEARCH PROCESS

The 3D search system uses algorithms to convert the selected or drawn image-based query into a mathematical model that describes the features of the object being sought. This converts drawings and objects into a form that computers can work with. The search system then compares the mathematical description of the drawn or selected object to those of 3D objects stored in a database, looking for similarities in the described features. The key to the way computer programs look for 3D objects is the voxel (volume pixel).
A voxel is a set of graphical data, such as position, color, and density, that defines the smallest cube-shaped building block of a 3D image. Computers can display 3D images only in two dimensions. To do this, 3D rendering software takes an object and slices it into 2D cross sections. The cross sections consist of pixels (picture elements), which are single points in a 2D image.
To render the 3D image on a 2D screen, the computer determines how to display the 2D cross sections stacked on top of each other, using the applicable interpixel and interslice distances to position them properly. The computer interpolates data to fill in interslice gaps and create a solid image.
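To make the voxel idea concrete, the Python sketch below (a simplification assumed for illustration; real 3D search engines use much richer shape descriptors) converts a 3D point set into a coarse occupancy grid of voxels and compares two shapes by how much of that grid they share.

import numpy as np

def voxelize(points: np.ndarray, grid: int = 16) -> np.ndarray:
    """Map an (N, 3) point cloud into a grid x grid x grid boolean occupancy grid."""
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0                       # avoid division by zero on flat axes
    idx = ((points - mins) / spans * (grid - 1)).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=bool)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return vox

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard overlap of two occupancy grids: 1.0 means identical occupancy."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Usage: compare a random "query" shape against itself and against another shape.
query = voxelize(np.random.rand(500, 3))
other = voxelize(np.random.rand(500, 3))
print(similarity(query, query), similarity(query, other))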

Wednesday, 30 December 2015

Biological Computers

Abstract

Biological computers have emerged as an interdisciplinary field that draws together molecular biology, chemistry, computer science and mathematics. The highly predictable hybridization chemistry of DNA, the ability to completely control the length and content of oligonucleotides, and the wealth of enzymes available for modification of the DNA, make the use of nucleic acids an attractive candidate for all of these nanoscale applications.
A 'DNA computer' has been used for the first time to find the only correct answer from over a million possible solutions to a computational problem. Leonard Adleman of the University of Southern California in the US and colleagues used different strands of DNA to represent the 20 variables in their problem, which could be the most complex task ever solved without a conventional computer. The researchers believe that the complexity of the structure of biological molecules could allow DNA computers to outperform their electronic counterparts in future.

Scientists have previously used DNA computers to crack computational problems with up to nine variables, which involves selecting the correct answer from 512 possible solutions. But now Adleman's team has shown that a similar technique can solve a problem with 20 variables, which has 2^20, or 1,048,576, possible solutions.
Adleman and colleagues chose an 'exponential time' problem, in which each extra variable doubles the amount of computation needed. This is known as an NP-complete problem, and is notoriously difficult to solve for a large number of variables. Other NP-complete problems include the 'travelling salesman' problem - in which a salesman has to find the shortest route between a number of cities - and the calculation of interactions between many atoms or molecules.
Adleman and co-workers expressed their problem as a string of 24 'clauses', each of which specified a certain combination of 'true' and 'false' for three of the 20 variables. The team then assigned two short strands of specially encoded DNA to all 20 variables, representing 'true' and 'false' for each one.
In the experiment, each of the 24 clauses was represented by a gel-filled glass cell. The strands of DNA corresponding to the variables, and their 'true' or 'false' state, in each clause were then placed in the cells.
Each of the possible 1,048,576 solutions was then represented by a much longer strand of specially encoded DNA, which Adleman's team added to the first cell. If a long strand had a 'subsequence' that complemented all three short strands, it bound to them. Otherwise it passed through the cell.
To move on to the second clause of the formula, a fresh set of long strands was sent into the second cell, which trapped any long strand with a 'subsequence' complementary to all three of its short strands. This process was repeated until a complete set of long strands had been added to all 24 cells, corresponding to the 24 clauses. The long strands captured in the cells were collected at the end of the experiment, and these represented the solution to the problem.
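For comparison, the Python sketch below solves the same kind of problem by brute force on a conventional computer, under the standard 3-SAT reading of the clauses: every one of the 2^n candidate assignments is checked one at a time, whereas the DNA experiment filters all candidate strands in parallel. The toy instance and its encoding are illustrative assumptions, not Adleman's actual clause set.

from itertools import product

def satisfies(assignment, clauses):
    # A clause is satisfied if at least one of its three literals matches the assignment.
    return all(any(assignment[var] == val for var, val in clause) for clause in clauses)

def solve(num_vars, clauses):
    # Exhaustively enumerate all 2**num_vars candidate truth assignments.
    for bits in product([False, True], repeat=num_vars):
        if satisfies(bits, clauses):
            return bits
    return None

# Toy instance with 4 variables; each literal is (variable_index, required_value).
clauses = [((0, True), (1, False), (2, True)),
           ((1, True), (2, True), (3, False)),
           ((0, False), (2, False), (3, True))]
print(solve(4, clauses))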

DOCTOR IN A CELL

In previously produced biological computers, the input, output and "software" are all composed of DNA, the material of genes, while DNA-manipulating enzymes are used as "hardware". The newest version's input apparatus is designed to assess concentrations of specific RNA molecules, which may be overproduced or underproduced, depending on the type of cancer. Using pre-programmed medical knowledge, the computer then makes its diagnosis based on the detected RNA levels.
In response to a cancer diagnosis, the output unit of the computer can initiate the controlled release of a single-stranded DNA molecule that is known to interfere with the cancer cell's activities, causing it to self-destruct.

Rain Technology

Abstract

Rainfinity's technology originated in a research project at the California Institute of Technology (Caltech), in collaboration with NASA's Jet Propulsion Laboratory and the Defense Advanced Research Projects Agency (DARPA). The name of the original research project was RAIN, which stands for Reliable Array of Independent Nodes. The goal of the RAIN project was to identify key software building blocks for creating reliable distributed applications using off-the-shelf hardware.
The focus of the research was on high-performance, fault-tolerant and portable clustering technology for space-borne computing. Two important assumptions were made, and these two assumptions reflect the differentiations between RAIN and a number of existing solutions both in the industry and in academia:
1. The most general share-nothing model is assumed. There is no shared storage accessible from all computing nodes. The only way for the computing nodes to share state is to communicate via a network. This differentiates RAIN technology from existing back-end server clustering solutions such as SUNcluster, HP MC Serviceguard or Microsoft Cluster Server.
2. The distributed application is not an isolated system. The distributed protocols interact closely with existing networking protocols so that a RAIN cluster is able to interact with the environment. Specifically, technological modules were created to handle high-volume network-based transactions. This differentiates it from traditional distributed computing projects such as Beowulf.
In short, the RAIN project intended to marry distributed computing with networking protocols. It became obvious that RAIN technology was well-suited for Internet applications. During the RAIN project, key components were built to fulfill this vision. A patent was filed and granted for the RAIN technology. Rainfinity was spun off from Caltech in 1998, and the company has exclusive intellectual property rights to the RAIN technology. After the formation of the company, the RAIN technology has been further augmented, and additional patents have been filed.
The guiding concepts that shaped the architecture are as follows:

1. Network Applications

The architecture goals for clustering data network applications are different from clustering data storage applications. Similar goals apply in the telecom environment that provides the Internet backbone infrastructure, due to the nature of applications and services being clustered.

2. Shared-Nothing

The shared-storage cluster is the most widely used for database and application servers that store persistent data on disks. This type of cluster typically focuses on the availability of the database or application service rather than on performance. Recovery from failover is generally slow, because restoring application access to disk-based data takes minutes or longer, not seconds. Telecom servers deployed at the edge of the network are often diskless, keeping data in memory for performance reasons, and can tolerate only a short failover time. Therefore, a new type of share-nothing cluster with rapid failure detection and recovery is required. The only way for the nodes of a share-nothing cluster to share state is to communicate via the network.

3. Scalability

While the high-availability cluster focuses on recovery from unplanned and planned downtimes, this new type of cluster must also be able to maximize I/O performance by load balancing across multiple computing nodes. Linear scalability with network throughput is important. In order to maximize the total throughput, load-balancing decisions must be made dynamically, by measuring the current capacity of each computing node in real time. Static hashing does not guarantee an even distribution of traffic, as the sketch below illustrates.
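The contrast is sketched below in Python (an illustration assumed for this write-up, not Rainfinity's implementation): static hashing always sends a given client to the same node regardless of load, while a dynamic policy routes each request to the node currently reporting the most spare capacity.

import hashlib

NODES = ["node-a", "node-b", "node-c"]

def static_hash_route(client_ip: str) -> str:
    """Static hashing: the same client always lands on the same node, load or no load."""
    digest = int(hashlib.sha1(client_ip.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def dynamic_route(spare_capacity: dict) -> str:
    """Dynamic balancing: route to the node with the most spare capacity right now."""
    return max(spare_capacity, key=spare_capacity.get)

print(static_hash_route("192.0.2.17"))
print(dynamic_route({"node-a": 0.2, "node-b": 0.7, "node-c": 0.5}))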

4. Peer-to-Peer

A dispatcher-based, master-slave cluster architecture suffers in scalability because the dispatcher introduces a potential bottleneck. A peer-to-peer cluster architecture is more suitable for latency-sensitive data network applications processing short-lived sessions. A hybrid architecture should be considered to offset the need for more control over resource management. For example, a cluster can assign multiple authoritative computing nodes that process traffic in round-robin order for each network interface that is clustered, to reduce the overhead of traffic forwarding.

Bluetooth Broadcasting

Abstract

Bluetooth wireless technology (IEEE 802.15.1) is a short-range communications technology originally intended to replace the cables connecting portable and/or fixed devices while maintaining high levels of security. The key features of Bluetooth technology are threefold:
  • robustness
  • low power
  • low cost.
Bluetooth has been designed in a uniform way. This enables a wide range of devices to connect and communicate with each other using the Bluetooth wireless communication protocol. The Bluetooth technology has achieved global acceptance in such a way that any Bluetooth-enabled electronic device, almost anywhere in the world, is able to connect to other Bluetooth-enabled devices in its proximity.
Bluetooth-enabled electronic devices connect and communicate wirelessly through short-range, ad hoc networks known as piconets. Each device can simultaneously communicate with up to seven other devices within a single piconet. Each device can also belong to several piconets simultaneously. Piconets are established dynamically and automatically as Bluetooth-enabled devices enter and leave radio proximity. One of the main strengths of the Bluetooth wireless technology is the ability to handle data and voice transmissions simultaneously. This enables users to use a hands-free headset for voice calls, printing and fax capabilities, and synchronization of PDAs, laptops, and mobile phone applications, to name a few.
An important aspect of this thesis is the scalability of Bluetooth broadcasting. Since scalability can sometimes be a rather vague concept, we give a short explanation of the term. An important aspect of software products is how they cope with growth. For example, how does the system handle an increase in users or data traffic? This property of a software system is usually referred to as scalability. A more detailed specification of the concept is given by André Bondi, who defines it as follows: 'Scalability is a desirable attribute of a network, system, or process.
The concept connotes the ability of a system to accommodate an increasing number of elements or objects, to process growing volumes of work gracefully, and/or to be susceptible to enlargement.' Whenever a system meets these requirements, we can say that the system scales. In this thesis, scalability comes down to the question of whether the system is capable of dealing with large groups of users equipped with Bluetooth-enabled devices capable of receiving simple text messages.

Passive Broadcasting

The first type of business deals with broadcasting from a central location, which we will call passive broadcasting. Most of these companies sell both the hardware and software to enable this. For example, BlueCasting by Filter WorldWide, one of the major players in the market, which made the news in August 2005 when they distributed promotional content for the British pop band Coldplay, offers a product family divided into four types of systems. They offer solutions for small retail shops, one-off events such as music festivals, and even larger areas such as airports and train stations.
The latest descendant in the family is a system that provides an interactive touchscreen allowing users to interact directly with the system. BlueCasting is an example of a product that comes with both hardware (one or more BlueCast Servers) and software (the BlueCast Campaign Management System), which is used to provide remote setup, maintenance and reporting. Besides this type of company, i.e. those selling the total package, other companies have dedicated themselves to providing just the hardware. An example is BlueGiga. According to their website, their BlueGiga Access Servers are used by more than 350 Bluetooth marketing companies in more than 65 countries. They sell two lines of products: Bluetooth Modules and Bluetooth Access Servers. The modules are described as 'completely integrated, certified, high-performance Radio Frequency products including all needed Bluetooth profiles'.
Access Servers are sold in the form of Access Points (up to 7 connections) and Access Servers (up to 21 connections). Besides this they also sell the BlueGiga Solution Manager (BSM). This is a web-based remote management and monitoring platform for BlueGiga Access Servers that can be used to simultaneously upgrade, monitor and configure a large number of BlueGiga Access Servers, instead of configuring each device one-by-one.

Bluetooth Core System Architecture :

The transceiver operates in the globally unlicensed ISM band at 2.4 GHz. The bit rate is 1 megabit per second and can be boosted to 2 or 3 Mb/s with Enhanced Data Rate (EDR). The 79 channels in the band are numbered 0-78 and are spaced 1 MHz apart, beginning at 2402 MHz. Bluetooth-enabled devices that are communicating share a radio channel and are synchronized to a common clock and frequency-hopping pattern. Frequency hopping is used to make the protocol more robust against interference from other devices operating in the same band.
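As a quick check of these numbers, the small Python sketch below maps a channel number to its carrier frequency and prints a toy hop sequence; the pseudo-random hop shown here is purely illustrative, since the real hop-selection kernel is derived from the master's clock and device address.

import random

def channel_frequency_mhz(channel: int) -> int:
    """Carrier frequency of a Bluetooth basic-rate channel: 1 MHz spacing from 2402 MHz."""
    if not 0 <= channel <= 78:
        raise ValueError("Bluetooth basic-rate channels are numbered 0-78")
    return 2402 + channel

# Example: the edges of the band, and an illustrative 8-slot hop sequence.
print(channel_frequency_mhz(0), channel_frequency_mhz(78))   # 2402, 2480
hops = [random.randrange(79) for _ in range(8)]
print([channel_frequency_mhz(ch) for ch in hops])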
The physical channel is sub-divided into time units known as slots. Data is transmitted between Bluetooth-enabled devices in packets. These packets are situated in the slots. Packets can fill one or more consecutive slots, allowing larger data chunks to be transmitted if the circumstances admit this. Bluetooth is primarily designed for low power consumption and affordability and has a relatively short range (1, 10 or 100 meters). It makes use of low-cost transceiver microchips that are embedded in each device.
The Bluetooth baseband is the part of the Bluetooth system that specifies or implements the medium access and physical layer procedures between Bluetooth devices. Several devices can be joined together in what is called a piconet. One device owns the clock and the frequency-hopping pattern and is called the master. Two or more piconets can be joined in what is called a scatternet. To form a scatternet, some units, called gateways, belong to different piconets. Such a unit can be a slave in more than one piconet but can act as a master in only one.

Besides this, it can transmit and receive data in only one piconet at a time. To visualize this, imagine the following. You are on the phone with a friend, using your Bluetooth headset, while at the same time you are uploading pictures from your computer to your phone. Your phone now acts as a gateway, being the master in the piconet with your headset and slave in the one with your computer.


3d Optical Data Storage

Abstract

3D optical data storage is the term given to any form of optical data storage in which information can be recorded and/or read with three-dimensional resolution (as opposed to the two-dimensional resolution afforded, for example, by CD). This innovation has the potential to provide petabyte-level mass storage on DVD-sized disks. Data recording and readback are achieved by focusing lasers within the medium. However, because of the volumetric nature of the data structure, the laser light must travel through other data points before it reaches the point where reading or recording is desired.
Therefore, some kind of non-linearity is required to ensure that these other data points do not interfere with the addressing of the desired point. No commercial product based on 3D optical data storage has yet arrived on the mass market, although several companies are actively developing the technology and claim that it may become available soon.
The origins of the field date back to the 1950s, when Yehuda Hirshberg developed the photochromic spiropyrans and suggested their use in data storage. In the 1970s, Valeri Barachevskii demonstrated that this photochromism could be produced by two-photon excitation, and finally at the end of the 1980s Peter T. Rentzepis showed that this could lead to three-dimensional data storage. This proof-of-concept system stimulated a great deal of research and development, and in the following decades many academic and commercial groups have worked on 3D optical data storage products and technologies. Most of the developed systems are based to some extent on the original ideas of Rentzepis.
A wide range of physical phenomena for data reading and recording have been investigated, large numbers of chemical systems for the medium have been developed and evaluated, and extensive work has been carried out in solving the problems associated with the optical systems required for the reading and recording of data. Currently, several groups remain working on solutions with various levels of development and interest in commercialization.

Optical Recording Technology

Optical storage systems consist of a drive unit and a storage medium in a rotating disk form. In general the disks are pre-formatted using grooves and lands (tracks) to enable the positioning of an optical pick-up and recording head to access the information on the disk. Under the influence of a focused laser beam emanating from the optical head, information is recorded on the media as a change in the material characteristics. The disk media and the pick-up head are rotated and positioned through drive motors controlling the position of the head with respect to data tracks on the disk. Additional peripheral electronics are used for control and data acquisition and encoding/decoding.
As an example, a prototypical 3D optical data storage system may use a disk that looks much like a transparent DVD. The disc contains many layers of information, each at a different depth in the media and each consisting of a DVD-like spiral track. In order to record information on the disc, a laser is brought to a focus at a particular depth in the media that corresponds to a particular information layer. When the laser is turned on it causes a photochemical change in the media. As the disc spins and the read/write head moves along a radius, the layer is written just as a DVD-R is written. The depth of the focus may then be changed and another, entirely different layer of information written. The distance between layers may be 5 to 100 micrometers, allowing more than 100 layers of information to be stored on a single disc.
In order to read the data back (in this example), a similar procedure is used, except that instead of causing a photochemical change in the media the laser causes fluorescence. This is achieved, for example, by using a lower laser power or a different laser wavelength. The intensity or wavelength of the fluorescence is different depending on whether the media has been written at that point, and so by measuring the emitted light the data is read.
The size of individual chromophore molecules or photoactive color centers is much smaller than the size of the laser focus (which is determined by the diffraction limit). The light therefore addresses a large number (possibly even 10^9) of molecules at any one time, so the medium acts as a homogeneous mass rather than a matrix structured by the positions of chromophores.
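A back-of-the-envelope Python sketch of the layered-capacity argument is given below; the assumption that each layer holds roughly what a single DVD layer holds (about 4.7 GB) is illustrative, not a product specification.

def disc_capacity_gb(layers: int, per_layer_gb: float = 4.7) -> float:
    """Rough total capacity of a multilayer disc, assuming DVD-like layers."""
    return layers * per_layer_gb

for layers in (10, 100, 200):
    print(f"{layers:3d} layers -> ~{disc_capacity_gb(layers):7.1f} GB")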

Comparison with Holographic Data Storage:

3D optical data storage is related to (and competes with) holographic data storage. Traditional examples of holographic storage do not address the third dimension, and are therefore not strictly "3D", but more recently 3D holographic storage has been realized by the use of micro-holograms. Layer-selection multilayer technology (where a multilayer disc has layers that can be individually activated, e.g. electrically) is also closely related.
Holographic data storage is a potential replacement technology in the area of high-capacity data storage currently dominated by magnetic and conventional optical data storage. Magnetic and optical data storage devices rely on individual bits being stored as distinct magnetic or optical changes on the surface of the recording medium. Holographic data storage overcomes this limitation by recording information throughout the volume of the medium and is capable of recording multiple images in the same area utilizing light at different angles.
Additionally, whereas magnetic and optical data storage records information a bit at a time in a linear fashion, holographic storage is capable of recording and reading millions of bits in parallel, enabling data transfer rates greater than those attained by traditional optical storage.
The stored data is read through the reproduction of the same reference beam used to create the hologram. The reference beam’s light is focused on the photosensitive material, illuminating the appropriate interference pattern, the light diffracts on the interference pattern, and projects the pattern onto a detector. The detector is capable of reading the data in parallel, over one million bits at once, resulting in the fast data transfer rate. Files on the holographic drive can be accessed in less than 200 milliseconds.



Tuesday, 29 December 2015

Li-Fi Technology

Abstract

Whether you’re using wireless internet in a coffee shop, stealing it from the guy next door, or competing for bandwidth at a conference, you’ve probably gotten frustrated at the slow speeds you face when more than one device is tapped into the network. As more and more people and their many devices access wireless internet, clogged airwaves are going to make it increasingly difficult to latch onto a reliable signal. But radio waves are just one part of the spectrum that can carry our data. What if we could use other waves to surf the internet?
One German physicist, Dr. Harald Haas, has come up with a solution he calls “Data Through Illumination”: taking the fiber out of fiber optics by sending data through an LED light bulb that varies in intensity faster than the human eye can follow. It’s the same idea behind infrared remote controls, but far more powerful. Haas says his invention, which he calls D-Light, can produce data rates faster than 10 megabits per second, which is speedier than your average broadband connection. He envisions a future where data for laptops, smartphones, and tablets is transmitted through the light in a room. And security would be a snap: if you can’t see the light, you can’t access the data.
Li-Fi is a VLC (visible light communication) technology developed by a team of scientists including Dr Gordon Povey, Prof. Harald Haas and Dr Mostafa Afgani at the University of Edinburgh. The term Li-Fi was coined by Prof. Haas when he amazed people by streaming high-definition video from a standard LED lamp at TED Global in July 2011. Li-Fi is now part of the Visible Light Communications (VLC) PAN IEEE 802.15.7 standard. “Li-Fi is typically implemented using white LED light bulbs. These devices are normally used for illumination by applying a constant current through the LED. However, by fast and subtle variations of the current, the optical output can be made to vary at extremely high speeds. Unseen by the human eye, this variation is used to carry high-speed data,” says Dr Povey, Product Manager of the University of Edinburgh's Li-Fi Program ‘D-Light Project’.

Introduction of Li-Fi Technology

In simple terms, Li-Fi can be thought of as a light-based Wi-Fi. That is, it uses light instead of radio waves to transmit information. And instead of Wi-Fi modems, Li-Fi would use transceiver-fitted LED lamps that can light a room as well as transmit and receive information. Since simple light bulbs are used, there can technically be any number of access points.
This technology uses a part of the electromagnetic spectrum that is still not greatly utilized: the visible spectrum. Light has been part of our lives for millions and millions of years and does not have any major ill effect. Moreover, there is 10,000 times more space available in this spectrum, and just counting the bulbs already in use, it also multiplies to 10,000 times more availability as an infrastructure, globally.
It is possible to encode data in the light by varying the rate at which the LEDs flicker on and off to give different strings of 1s and 0s. The LED intensity is modulated so rapidly that human eyes cannot notice, so the output appears constant.
More sophisticated techniques could dramatically increase VLC data rates. Teams at the University of Oxford and the University of Edinburgh are focusing on parallel data transmission using arrays of LEDs, where each LED transmits a different data stream. Other groups are using mixtures of red, green and blue LEDs to alter the light's frequency, with each frequency encoding a different data channel.
Li-Fi, as it has been dubbed, has already achieved blisteringly high speeds in the lab. Researchers at the Heinrich Hertz Institute in Berlin, Germany, have reached data rates of over 500 megabits per second using a standard white-light LED. Haas has set up a spin-off firm to sell a consumer VLC transmitter that is due for launch next year. It is capable of transmitting data at 100 Mbit/s, faster than most UK broadband connections.

Genesis of LI-FI:


Harald Haas, a professor at the University of Edinburgh who began his research in the field in 2004, gave a debut demonstration of what he called a Li-Fi prototype at the TEDGlobal conference in Edinburgh on 12 July 2011. He used a table lamp with an LED bulb to transmit a video of blooming flowers that was then projected onto a screen behind him. During the event he periodically blocked the light from the lamp to prove that the lamp was indeed the source of the incoming data. At TEDGlobal, Haas demonstrated a data rate of around 10 Mbps, comparable to a fairly good UK broadband connection. Two months later he achieved 123 Mbps.

How Li-Fi Works?

Li-Fi is typically implemented using white LED light bulbs at the downlink transmitter. These devices are normally used for illumination only, by applying a constant current. However, by fast and subtle variations of the current, the optical output can be made to vary at extremely high speeds, and it is this property that a Li-Fi setup exploits. The operational procedure is very simple: if the LED is on, you transmit a digital 1; if it is off, you transmit a 0. The LEDs can be switched on and off very quickly, which gives nice opportunities for transmitting data. Hence all that is required is some LEDs and a controller that codes data onto those LEDs. All one has to do is vary the rate at which the LEDs flicker depending upon the data we want to encode.
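The Python sketch below illustrates this on-off keying at its simplest; the sample values and threshold are illustrative assumptions, and a real Li-Fi link would add clock recovery, error coding and far higher symbol rates.

def encode(bits):
    """Map data bits to LED intensity levels (1.0 = on, 0.0 = off)."""
    return [1.0 if b else 0.0 for b in bits]

def decode(samples, threshold=0.5):
    """Threshold received light samples back into bits."""
    return [1 if s > threshold else 0 for s in samples]

data = [1, 0, 1, 1, 0, 0, 1, 0]
received = [level + 0.05 for level in encode(data)]   # pretend a little ambient light is added
assert decode(received) == data
print(decode(received))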
Further enhancements can be made to this method, such as using an array of LEDs for parallel data transmission, or using mixtures of red, green and blue LEDs to alter the light’s frequency, with each frequency encoding a different data channel. Such advancements promise a theoretical speed of 10 Gbps, meaning one could download a full high-definition film in just 30 seconds.
To further get a grasp of Li-Fi, consider an IR remote (fig 3.3). It sends a single data stream of bits at a rate of 10,000-20,000 bps. Now replace the IR LED with a light box containing a large LED array. This system (fig 3.4) is capable of sending thousands of such streams in parallel at a very fast rate.
Light is inherently safe and can be used in places where radio frequency communication is often deemed problematic, such as in aircraft cabins or hospitals. So visible light communication not only has the potential to solve the problem of lack of spectrum space, but can also enable novel applications. The visible light spectrum is unused, it is not regulated, and it can be used for communication at very high speeds.