Thursday, December 31, 2015

Robot



                                Introduction of Robot 


The term robot was coined by the Czech playwright Karel Capek (CHAH pek) from the Czech word for “forced labor” or “serf.” Capek was reportedly several times a candidate for the Nobel prize for his works, and was a very influential and prolific writer and playwright. Fortunately, he died in 1938, before the Gestapo got to him for his anti-Nazi sympathies. Capek used the word robot in his play “R.U.R.” (“Rossum’s Universal Robots”), which opened in Prague in January 1921, a play in which automata are mass-produced by an Englishman named Rossum. The automata, robots, are meant to do the world’s work and to make a better life for human beings; but in the end they rebel, wipe out humanity, and start a new race of intelligent life for themselves. A robot, then, is a kind of computer-controlled machine, a non-living thing, and robots have become important tools for scientific research.

“Rossum comes from a Czech word, rozum, meaning ‘reason,’ and ‘intellect.’ The popularity of the play diminished the use of the old term automaton and robot has replaced it in just about every language, so that now a robot is commonly thought of as any artificial device (often pictured in at least vaguely human form) that will perform functions ordinarily thought to be appropriate for human beings.”
The play was an enormous success and productions soon opened throughout Europe and the US. R.U.R’s theme, in part, was the dehumanization of man in a technological civilization. You may find it surprising that the robots were not mechanical in nature but were created through chemical means. In fact, in an essay written in 1935, Capek strongly rejected the idea that it was at all possible to create such creatures and, writing in the third person, said: “It is with horror, frankly, that he rejects all responsibility for the idea that metal contraptions could ever replace human beings, and that by means of wires they could awaken something like life, love, or rebellion. He would deem this dark prospect to be either an overestimation of machines, or a grave offence against life.”
For many people a robot is a machine that imitates a human, like the androids in Star Wars, Terminator and Star Trek: The Next Generation. However much these robots capture our imagination, they still inhabit only science fiction. People still haven't been able to give a robot enough 'common sense' to reliably interact with a dynamic world. However, Rodney Brooks and his team at the MIT Artificial Intelligence Lab are working on creating such humanoid robots.

The type of robot that you will encounter most frequently does work that is too dangerous, boring, onerous, or just plain nasty for people. Most of the robots in the world are of this type. They can be found in the auto, medical, manufacturing and space industries; in fact, there are over a million of them working for us today. Some robots, like the Mars rover Sojourner and the upcoming Mars Exploration Rover, or the underwater robot Caribou, help us learn about places that are too dangerous for us to go. Other types of robots are just plain fun for kids of all ages: popular toys such as Teckno, Polly or the AIBO ERS-220 seem to hit the store shelves every year around Christmas time.

And as much fun as robots are to play with, they are even more fun to build. In Being Digital, Nicholas Negroponte tells a wonderful story about an eight-year-old, pressed during a televised premiere of the MIT Media Lab's LEGO/Logo work at Hennigan School. A zealous anchor, looking for a cute sound bite, kept asking the child if he was having fun playing with LEGO/Logo. Clearly exasperated, but not wishing to offend, the child first tried to put her off. After her third attempt to get him to talk about fun, the child, sweating under the hot television lights, plaintively looked into the camera and answered, "Yes it is fun, but it's hard fun."


Some important robot characteristics, which the sketch after this list ties together:

- Sensing: First of all, your robot has to be able to sense its surroundings, in ways not unlike the ways you sense yours. Light sensors (eyes), touch and pressure sensors (hands), chemical sensors (nose), hearing and sonar sensors (ears), and taste sensors (tongue) will give your robot awareness of its environment.
- Movement: A robot needs to be able to move around its environment, whether rolling on wheels, walking on legs or propelling itself by thrusters. To count as a robot, either the whole robot moves, like Sojourner, or just part of it moves, like the Canadarm.
- Energy: A robot needs to be able to power itself. It might be solar powered, electrically powered or battery powered; how your robot gets its energy will depend on what it needs to do.
- Intelligence: A robot needs some kind of "smarts," and this is where programming enters the picture. A programmer gives the robot its "smarts," and the robot must have some way to receive the program so that it knows what to do.
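Taken together, these four characteristics amount to the classic sense-think-act loop that most robot programs run. Here is a minimal sketch of that loop in Python; the Robot class, its sensor readings and its action names are hypothetical placeholders invented for illustration, not any real robot's API.

```python
# A minimal sense-think-act loop. Everything here (the Robot class, the
# sensor readings, the action names) is a hypothetical placeholder.
import time

class Robot:
    def sense(self) -> dict:
        """Sensing: read the sensors (light, touch, battery, ...) into one snapshot."""
        return {"light": 0.8, "bumper_pressed": False, "battery": 0.42}

    def think(self, readings: dict) -> str:
        """Intelligence: the programmed 'smarts' that map readings to an action."""
        if readings["battery"] < 0.2:
            return "seek_charger"        # Energy: manage its own power
        if readings["bumper_pressed"]:
            return "back_up"             # react to touch
        return "drive_toward_light"      # simple phototaxis

    def act(self, action: str) -> None:
        """Movement: drive the motors to carry out the chosen action."""
        print(f"executing: {action}")

robot = Robot()
for _ in range(3):                       # a real robot would loop forever
    robot.act(robot.think(robot.sense()))
    time.sleep(0.1)
```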

The history of robots has its origins in the ancient world. The modern concept began to develop with the onset of the Industrial Revolution, which allowed the use of complex mechanics, and the subsequent introduction of electricity, which made it possible to power machines with small compact motors. In the early 20th century, the notion of a humanoid machine was developed. Today it is possible to envisage human-sized robots with the capacity for near-human thought and movement.

The first uses of modern robots were in factories as industrial robots – simple fixed machines capable of manufacturing tasks which allowed production without the need for human assistance. Digitally controlled industrial robots and robots making use of artificial intelligence have been built since the 1960s.

Early legends

Hephaestus, Greek god of craftsmen.
Concepts of artificial servants and companions date at least as far back as the ancient legends of Cadmus, who sowed dragon's teeth that turned into soldiers, and the myth of Pygmalion, whose statue of Galatea came to life. Many ancient mythologies included artificial people, such as the talking mechanical handmaidens built by the Greek god Hephaestus (Vulcan to the Romans) out of gold, the clay golems of Jewish legend and the clay giants of Norse legend. Chinese legend relates that in the 10th century BC, Yan Shi made an automaton resembling a human, in an account from the Lie Zi text.

In Greek mythology, Hephaestus created utilitarian three-legged tables that could move about under their own power, and a bronze man, Talos, that defended Crete. Talos was eventually destroyed by Medea, who cast a lightning bolt at his single vein of lead. To take the Golden Fleece, Jason was also required to tame two fire-breathing bulls with bronze hooves; and like Cadmus he sowed the teeth of a dragon, which sprang up as soldiers.

The Indian Lokapannatti (11th/12th century) tells the story of King Ajatashatru of Magadha, who gathered the Buddha's relics and hid them in an underground stupa. The relics were protected by mechanical robots (bhuta vahana yanta) from the kingdom of Roma visaya, until they were disarmed by King Ashoka. In the Egyptian legend of Rocail, the younger brother of Seth created a palace and a sepulcher containing autonomous statues that lived out the lives of men so realistically they were mistaken for having souls.

In Christian legend, several of the men associated with the introduction of Arabic learning (and, through it, the reintroduction of Aristotle's and Hero's works) to medieval Europe were said to have devised brazen heads that could answer questions posed to them. Albertus Magnus was supposed to have constructed an entire android that could perform some domestic tasks, but it was destroyed by Albert's student Thomas Aquinas for disturbing his thought. The most famous legend concerned a bronze head devised by Roger Bacon, which was destroyed or scrapped after he missed its moment of operation.

Automata were popular in the imaginary worlds of medieval literature. For instance, the Middle Dutch tale Roman van Walewein ("The Romance of Walewein", early 13th century) described mechanical birds and angels producing sound by means of systems of pipes.

Early beginnings

The water-powered mechanism of Su Song's astronomical clock tower, featuring a clepsydra tank, waterwheel, escapement mechanism, and chain drive to power an armillary sphere and 113 striking clock jacks to sound the hours and to display informative plaques.
Concepts akin to a robot can be found as long ago as the 4th century BC, when the Greek mathematician Archytas of Tarentum postulated a mechanical bird he called "The Pigeon", which was propelled by steam. Another early automaton was the clepsydra, or water clock, made in 250 BC by Ctesibius of Alexandria, a physicist and inventor from Ptolemaic Egypt. Hero of Alexandria (10–70 AD) made numerous innovations in the field of automata, including one that allegedly could speak.

Taking up the earlier reference in Homer's Iliad, Aristotle speculated in his Politics (ca. 322 BC, book 1, part 4) that automatons could someday bring about human equality by making possible the abolition of slavery:

There is only one condition in which we can imagine managers not needing subordinates, and masters not needing slaves. This condition would be that each instrument could do its own work, at the word of command or by intelligent anticipation, like the statues of Daedalus or the tripods made by Hephaestus, of which Homer relates that "Of their own motion they entered the conclave of Gods on Olympus", as if a shuttle should weave of itself, and a plectrum should do its own harp playing.
In ancient China, an account on automata is found in the Lie Zi text, written in the 3rd century BC, in which King Mu of Zhou (1023–957 BC) is presented with a life-size, human-shaped mechanical figure by Yan Shi, an "artificer".

The Cosmic Engine, a 10-metre (33 ft) clock tower built by Su Song in Kaifeng, China, in 1088, featured mechanical mannequins that chimed the hours, ringing gongs or bells among other devices.

Al-Jazari's programmable humanoid robots.
Al-Jazari (1136–1206), a Muslim inventor during the Artuqid dynasty, designed and constructed a number of automatic machines, including kitchen appliances, musical automata powered by water, and the first programmable humanoid robot in 1206. Al-Jazari's robot was a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties. His mechanism had a programmable drum machine with pegs (cams) that bump into little levers that operate the percussion. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.


Tea-serving karakuri, with mechanism, 19th century. Tokyo National Science Museum.
Hero's works on automata were translated into Latin amid the 12th century Renaissance. The early 13th-century artist-engineer Villard de Honnecourt sketched plans for several automata. At the end of the thirteenth century, Robert II, Count of Artois, built a pleasure garden at his castle at Hesdin that incorporated a number of robots, humanoid and animal.
One of the first recorded designs of a humanoid robot was made by Leonardo da Vinci (1452–1519) in around 1495. Da Vinci's notebooks, rediscovered in the 1950s, contain detailed drawings of a mechanical knight in armour which was able to sit up, wave its arms and move its head and jaw. The design is likely to be based on his anatomical research recorded in the Vitruvian Man but it is not known whether he attempted to build the robot (see: Leonardo's robot). In 1533, Johannes Müller von Königsberg created an automaton eagle and fly made of iron; both could fly. John Dee is also known for creating a wooden beetle, capable of flying.

Around 1700, many automata were built, including ones capable of acting, drawing, flying, and playing music; some of the most famous works of the period were created by Jacques de Vaucanson in 1737, including an automaton flute player, a tambourine player, and his most famous work, "The Digesting Duck". Vaucanson's duck was powered by weights and was capable of imitating a real duck by flapping its wings (over 400 parts were in each of the wings alone), eating grain, digesting it, and defecating by excreting matter stored in a hidden compartment.
The Japanese craftsman Hisashige Tanaka, known as "Japan's Edison", created an array of extremely complex mechanical toys, some of which were capable of serving tea, firing arrows drawn from a quiver, or even painting a Japanese kanji character. The landmark text Karakuri Zui (Illustrated Machinery) was published in 1796.

Remote-controlled systems

The Brennan torpedo, one of the earliest "guided missiles".
Remotely operated vehicles were demonstrated in the late 19th century in the form of several types of remotely controlled torpedoes. The early 1870s saw remotely controlled torpedoes by John Ericsson (pneumatic), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided).
The Brennan torpedo, invented by Louis Brennan in 1877, was powered by two contra-rotating propellers that were spun by rapidly pulling out wires from drums wound inside the torpedo. Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it "the world's first practical guided missile". In 1898 Nikola Tesla publicly demonstrated a "wireless" radio-controlled torpedo that he hoped to sell to the U.S. Navy.

Archibald Low was known as the "father of radio guidance systems" for his pioneering work on guided rockets and planes during the First World War. In 1917, he demonstrated a remote controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket.

Humanoid robots
The term "robot" was first used to denote fictional automata in the 1921 play R.U.R. (Rossum's Universal Robots) by the Czech writer, Karel Capek. According to Capek, the word was created by his brother Josef from the Czech "robota", meaning servitude. The play, R.U.R, replaced the popular use of the word "automaton" with the word "robot." In 1927, Fritz Lang's Metropolis was released; the Maschinenmensch ("machine-human"), a gynoid humanoid robot, also called "Parody", "Futura", "Robotrix", or the "Maria impersonator" (played by German actress Brigitte Helm), was the first robot ever to be depicted on film. In many films, radio and television programs of the 1950s and before, the word “robot” was usually pronounced “robit,” even though it was spelled “bot” and not “bit.” Examples include “The Lonely” episode of the TV series “The Twilight Zone,” first aired on November 15, 1959, and all episodes of the sci-fi radio program “X Minus One.”

Many robots were constructed before the dawn of computer-controlled servomechanisms, for the public relations purposes of major firms. These were essentially machines that could perform a few stunts, like the automatons of the 18th century. In 1928, one of the first humanoid robots was exhibited at the annual exhibition of the Model Engineers Society in London. Invented by W. H. Richards, the robot Eric's frame consisted of an aluminium body of armour with eleven electromagnets and one motor powered by a twelve-volt power source. The robot could move its hands and head and could be controlled through remote control or voice control.

An even earlier humanoid automaton was a soldier with a trumpet, made in 1810 by Friedrich Kauffman in Dresden, Germany. It remained on display until at least April 30, 1950.

Westinghouse Electric Corporation built Televox in 1926; it was a cardboard cutout connected to various devices which users could turn on and off. In 1939, the humanoid robot known as Elektro debuted at the World's Fair. Seven feet tall (2.1 m) and weighing 265 pounds (120.2 kg), it could walk by voice command, speak about 700 words (using a 78-rpm record player), smoke cigarettes, blow up balloons, and move its head and arms. The body consisted of a steel gear, cam and motor skeleton covered by an aluminum skin. Earlier, in 1928, Japan's first robot, Gakutensoku, had been designed and constructed by biologist Makoto Nishimura.

Modern autonomous robots
In 1941 and 1942, Isaac Asimov formulated the Three Laws of Robotics, and in the process of doing so, coined the word "robotics". In 1948, Norbert Wiener formulated the principles of cybernetics, the basis of practical robotics.

The first electronic autonomous robots with complex behaviour were created by William Grey Walter of the Burden Neurological Institute at Bristol, England in 1948 and 1949. He wanted to prove that rich connections between a small number of brain cells could give rise to very complex behaviors - essentially that the secret of how the brain worked lay in how it was wired up. His first robots, named Elmer and Elsie, were constructed between 1948 and 1949 and were often described as tortoises due to their shape and slow rate of movement. The three-wheeled tortoise robots were capable of phototaxis, by which they could find their way to a recharging station when they ran low on battery power.

Walter stressed the importance of using purely analogue electronics to simulate brain processes at a time when his contemporaries such as Alan Turing and John von Neumann were all turning towards a view of mental processes in terms of digital computation. His work inspired subsequent generations of robotics researchers such as Rodney Brooks, Hans Moravec and Mark Tilden. Modern incarnations of Walter's turtles may be found in the form of BEAM robotics.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'" The term 'Artificial Intelligence' was created at a conference held at Dartmouth College in 1956.  Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956), and the General Problem Solver in 1957. In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000. John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.


U.S. Patent 2,988,237, issued in 1961 to Devol.
The first digitally operated and programmable robot was invented by George Devol in 1954 and was ultimately called the Unimate. Devol sold the first Unimate to General Motors in 1960, and it was installed in 1961 in a plant in Trenton, New Jersey, to lift hot pieces of metal from a die-casting machine and stack them. Devol's patent for this first digitally operated programmable robotic arm laid the foundation of the modern robotics industry.

The Rancho Arm was developed as a robotic arm to help handicapped patients at the Rancho Los Amigos Hospital in Downey, California; this computer controlled arm was bought by Stanford University in 1963. IBM announced its IBM System/360 in 1964. The system was heralded as being more powerful, faster, and more capable than its predecessors.

The film 2001: A Space Odyssey was released in 1968; the movie prominently features HAL 9000, a malevolent artificial intelligence unit which controls a spacecraft. Marvin Minsky created the Tentacle Arm in 1968; the arm was computer controlled and its 12 joints were powered by hydraulics. Mechanical engineering student Victor Scheinman created the Stanford Arm in 1969; the Stanford Arm is recognized as the first electronic computer-controlled robotic arm (the Unimate's instructions were stored on a magnetic drum). The first mobile robot capable of reasoning about its surroundings, Shakey, was built in 1970 by the Stanford Research Institute (now SRI International). Shakey combined multiple sensor inputs, including TV cameras, laser rangefinders, and "bump sensors", to navigate. In the winter of 1970, the Soviet Union explored the surface of the Moon with the lunar vehicle Lunokhod 1, the first roving remote-controlled robot to land on another world.

1970s

The Freddy II Robot, built in 1973–76.
Artificial intelligence critic Hubert Dreyfus published his influential book What Computers Can't Do in 1972. Freddy and Freddy II, both built in the United Kingdom, were robots capable of assembling wooden blocks over a period of several hours. The German-based company KUKA built the world's first industrial robot with six electromechanically driven axes, known as FAMULUS, in 1973. In 1974, David Silver designed the Silver Arm, which was capable of fine movements replicating human hands; feedback was provided by touch and pressure sensors and analyzed by a computer. Marvin Minsky published his landmark paper "A Framework for Representing Knowledge" on artificial intelligence in 1974.

Joseph Weizenbaum (creator of ELIZA, a program capable of simulating a Rogerian psychotherapist) published Computer Power and Human Reason in 1976, presenting an argument against the creation of artificial intelligence. The SCARA (Selective Compliance Assembly Robot Arm) was created in 1978 as an efficient 4-axis robotic arm. Best used for picking up parts and placing them in another location, the SCARA was introduced to assembly lines in 1981. XCON, an expert system designed to customize orders for industrial use, was released in 1979. The Stanford Cart successfully crossed a room full of chairs in 1979, relying primarily on stereo vision to navigate and determine distances. The Robotics Institute at Carnegie Mellon University was founded in 1979 by Raj Reddy.

1980s

KUKA IR 160/60 Robots from 1983
Takeo Kanade created the first "direct drive arm" in 1981; the first of its kind, the arm's motors were contained within the robot itself, eliminating long transmissions. Cyc, a project to create a database of common sense for artificial intelligence, was started in 1984 by Douglas Lenat. The program attempts to deal with ambiguity in language, and is still underway. In 1983, the first program to publish a book, Racter, programmed by William Chamberlain and Thomas Etter, wrote "The Policeman's Beard is Half-Constructed". It is now thought that a system of complex templates was used.

In 1984 Wabot-2 was revealed; capable of playing the organ, Wabot-2 had 10 fingers and two feet, and was able to read a musical score and accompany a person. The chess-playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed at Carnegie Mellon University; Deep Thought's development paved the way for Deep Blue.

In 1986, Honda began its humanoid research and development program to create robots capable of interacting successfully with humans. A hexapodal robot named Genghis was revealed by MIT in 1989. Genghis was famous for being made quickly and cheaply due to construction methods; Genghis used 4 microprocessors, 22 sensors, and 12 servo motors. Rodney Brooks and Anita M. Flynn published "Fast, Cheap, and Out of Control: A Robot Invasion of The Solar System". The paper advocated creating smaller cheaper robots in greater numbers to increase production time and decrease the difficulty of launching robots into space.

1990s
The biomimetic robot RoboTuna was built by doctoral student David Barrett at the Massachusetts Institute of Technology in 1996 to study how fish swim in water; it was designed to swim like, and resemble, a bluefin tuna. Invented by Dr. John Adler in 1994, the CyberKnife, a robot that performs stereotactic radiosurgery, offered an alternative treatment for tumors with accuracy comparable to surgery performed by human doctors.


IBM's Deep Blue computer defeated World Chess Champion Garry Kasparov in 1997.
Honda's P2 humanoid robot was first shown in 1996. Standing for "Prototype Model 2", P2 was an integral part of Honda's humanoid development project; over 6 feet tall, P2 was smaller than its predecessors and appeared more human-like in its motions. Expected to operate for only seven days, the Sojourner rover finally shut down in 1997 after 83 days of operation. This small robot (weighing only 23 lbs) performed semi-autonomous operations on the surface of Mars as part of the Mars Pathfinder mission; equipped with an obstacle-avoidance program, Sojourner was capable of planning and navigating routes to study the surface of the planet, and its ability to navigate with little data about its environment and nearby surroundings allowed it to react to unplanned events and objects. Also in 1997, IBM's chess-playing program Deep Blue beat the then-current World Chess Champion Garry Kasparov, playing at the "Grandmaster" level. The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.

The P3 humanoid robot was revealed by Honda in 1998 as part of the company's continuing humanoid project. In 1999, Sony introduced the AIBO, a robotic dog capable of interacting with humans; the first models released in Japan sold out in 20 minutes. Honda revealed the most advanced result of its humanoid project in 2000, named ASIMO. ASIMO can run, walk, communicate with humans, recognize faces, environments, voices and postures, and interact with its environment. Sony also revealed its Sony Dream Robots, small humanoid robots developed for entertainment. In October 2000, the United Nations estimated that there were 742,500 industrial robots in the world, with more than half of them being used in Japan.


Roomba vacuum cleaner docked in base station.
In April 2001, the Canadarm2 was launched into orbit and attached to the International Space Station. The Canadarm2 is a larger, more capable version of the arm used by the Space Shuttle, and is hailed as being "smarter". Also in April, the unmanned aerial vehicle Global Hawk made the first autonomous non-stop flight over the Pacific Ocean, from Edwards Air Force Base in California to RAAF Base Edinburgh in South Australia, in 22 hours. The popular Roomba, a robotic vacuum cleaner, was first released in 2002 by the company iRobot.
In 2004, Cornell University revealed a robot capable of self-replication: a set of cubes capable of attaching and detaching, the first robot capable of building copies of itself. On 3 and 24 January 2004, the Mars rovers Spirit and Opportunity landed on the surface of Mars. Launched in 2003, the two robots drove many times the distance originally expected, and Opportunity was still operating as of mid-2012.

Self-driving cars had made their appearance by the middle of the first decade of the 21st century, but there was room for improvement. All 15 teams competing in the 2004 DARPA Grand Challenge failed to complete the course, with no robot successfully navigating more than five percent of the 150-mile off-road course, leaving the $1 million prize unclaimed. In 2005, Honda revealed a new version of its ASIMO robot, updated with new behaviors and capabilities. In 2006, Cornell University revealed its "Starfish" robot, a 4-legged robot capable of self-modeling and of learning to walk after having been damaged. In 2007, TOMY launched the entertainment robot i-sobot, a humanoid bipedal robot that can walk like a human being and perform kicks, punches and some entertaining tricks and special actions under its "Special Action Mode".

Robonaut 2, the latest generation of the astronaut helpers, was launched to the space station aboard Space Shuttle Discovery on the STS-133 mission in 2011. It is the first humanoid robot in space, and although its primary job for now is teaching engineers how dexterous robots behave in space, the hope is that, through upgrades and advancements, it could one day venture outside the station to help spacewalkers make repairs or additions to the station or perform scientific work.

Commercial and industrial robots are now in widespread use performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for jobs which are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods.
With recent advances in computer hardware and data-management software, artificial representations of humans are also becoming widespread. Examples include OpenMRS and EMRBots.

Tuesday, December 29, 2015

Spaceship Earth

                                  Spaceship Earth

Spaceship Earth is a worldview term, usually expressing concern over the use of the limited resources available on Earth and encouraging everyone to act as a harmonious crew working toward the greater good.

The earliest known use is a passage in Henry George's best known work, Progress and Poverty (1879). From book IV, chapter 2:
It is a well-provisioned ship, this on which we sail through space. If the bread and beef above decks seem to grow scarce, we but open a hatch and there is a new supply, of which before we never dreamed. And very great command over the services of others comes to those who as the hatches are opened are permitted to say, "This is mine!"
George Orwell later paraphrased Henry George in The Road to Wigan Pier:
The world is a raft sailing through space with, potentially, plenty of provisions for everybody; the idea that we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions seems so blatantly obvious that one would say that no one could possibly fail to accept it unless he had some corrupt motive for clinging to the present system.
In 1965 Adlai Stevenson made a famous speech to the UN in which he said:
We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil; all committed for our safety to its security and peace; preserved from annihilation only by the care, the work, and, I will say, the love we give our fragile craft. We cannot maintain it half fortunate, half miserable, half confident, half despairing, half slave—to the ancient enemies of man—half free in a liberation of resources undreamed of until this day. No craft, no crew can travel safely with such vast contradictions. On their resolution depends the survival of us all.
The following year, Spaceship Earth became the title of a book by a friend of Stevenson's, the internationally influential economist Barbara Ward.
Also in 1966 Kenneth E. Boulding used the phrase in the title of an essay, The Economics of the Coming Spaceship Earth. Boulding described the past open economy of apparently illimitable resources, which he said he was tempted to call the "cowboy economy", and continued: "The closed economy of the future might similarly be called the 'spaceman' economy, in which the earth has become a single spaceship, without unlimited reservoirs of anything, either for extraction or for pollution, and in which, therefore, man must find his place in a cyclical ecological system". (David Korten would take up the "cowboys in a spaceship" theme in his 1995 book When Corporations Rule the World.)
The phrase was also popularized by Buckminster Fuller, who published a book in 1968 under the title of Operating Manual for Spaceship Earth. This quotation, referring to fossil fuels, reflects his approach:
"...we can make all of humanity successful through science's world-engulfing industrial evolution provided that we are not so foolish as to continue to exhaust in a split second of astronomical history the orderly energy savings of billions of years' energy conservation aboard our Spaceship Earth. These energy savings have been put into our Spaceship's life-regeneration-guaranteeing bank account for use only in self-starter functions."
United Nations Secretary-General U Thant spoke of Spaceship Earth on Earth Day March 21, 1971 at the ceremony of the ringing of the Japanese Peace Bell: "May there only be peaceful and cheerful Earth Days to come for our beautiful Spaceship Earth as it continues to spin and circle in frigid space with its warm and fragile cargo of animate life."
                                                                                    Epcot's Spaceship Earth
Spaceship Earth is the name given to the 165-foot geodesic sphere that greets visitors at the entrance of Walt Disney World's Epcot theme park. Housed within the sphere is a dark ride that explores the history of communications and promotes Epcot's founding principles, "belief and pride in man's ability to shape a world that offers hope to people everywhere." A previous incarnation of the ride, narrated by actor Jeremy Irons and replaced in 2008, was explicit in its message:
"Like a grand and miraculous spaceship, our planet has sailed through the universe of time, and for a brief moment, we have been among its many passengers... We now have the ability and the responsibility to build new bridges of acceptance and co-operation between us, to create a better world for ourselves and our children as we continue our amazing journey aboard Spaceship Earth."
David Deutsch has pointed out that the picture of Earth as a friendly "spaceship" habitat is difficult to defend even in a metaphorical sense. The Earth environment is harsh, and survival is a constant struggle for life, up to and including the extinction of whole species. Humans would not be able to live in most of the areas where they live now without the knowledge necessary to build life-support systems such as houses, heating and water supply.


In the year 2104 a fleet of research cruisers was launched into space. Their mission: to seek out new life. With every moment on board preserved by wall-to-wall monitoring and transmitted over time back to Earth, we’ve been allowed access to one of these ships: The Really Invincible III, Macclesfield Division. What you are about to hear took place, live, four years ago, seventy thousand light years from home.

So runs the premise of the radio series The Spaceship. A second series began broadcasting on 25 February 2008, with the first series repeated in the week prior to broadcast; in the second series the Really Invincible was upgraded to version 3.2.8.

The Scaled Composites SpaceShipThree (SS3) was a mid-2000s proposed spaceplane to be developed by Virgin Galactic and Scaled Composites, ostensibly to follow SpaceShipTwo (SS2).
The mission originally proposed for SpaceShipThree in 2005 was for orbital spaceflight, as part of a program called "Tier 2" by Scaled Composites.
By 2008, Scaled Composites had reduced those plans and articulated a conceptual design that would be a point-to-point vehicle traveling outside the atmosphere. As of 2008, the SpaceShipThree concept spacecraft was conceived to be used for transportation through point-to-point suborbital spaceflight with the spacecraft providing, for example, a two-hour trip on the Kangaroo Route (from London to Sydney or Melbourne).
Scaled was sold to Northrop Grumman in 2007, and references to further work on a conceptual Scaled SS3 ended after that time.

Wednesday, December 23, 2015

GPS System

                                       GPS System

The Global Positioning System (GPS) is a space-based navigation system that provides location and time information in all weather conditions, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. The system provides critical capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver.

The US began the GPS project in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including a number of classified engineering design studies from the 1960s. The U.S. Department of Defense (DoD) developed the system, which originally used 24 satellites. It became fully operational in 1995. Roger L. Easton, Ivan A. Getting and Bradford Parkinson are credited with inventing it.

Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites and Next Generation Operational Control System (OCX). Announcements from Vice President Al Gore and the White House in 1998 initiated these changes. In 2000, the U.S. Congress authorized the modernization effort, GPS III.


In addition to GPS, other systems are in use or under development. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from incomplete coverage of the globe until the mid-2000s. There are also the planned European Union Galileo positioning system, India's Indian Regional Navigation Satellite System, China's BeiDou Navigation Satellite System, and the Japanese Quasi-Zenith Satellite System.


The GPS concept is based on time. The satellites carry very stable atomic clocks that are synchronized to each other and to ground clocks. Any drift from true time maintained on the ground is corrected daily. Likewise, the satellite locations are monitored precisely. GPS receivers have clocks as well—however, they are not synchronized with true time, and are less stable. GPS satellites continuously transmit their current time and position. A GPS receiver monitors multiple satellites and solves equations to determine the exact position of the receiver and its deviation from true time. At a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and clock deviation from satellite time).
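To make that computation concrete, here is a minimal sketch of the position fix described above: given four or more satellite positions and pseudoranges, solve for the receiver's three coordinates and its clock deviation by Gauss-Newton iteration. It is an illustration of the geometry, not an actual receiver implementation, and the input values a caller would supply are assumed.

```python
# Minimal sketch: solve for receiver position (x, y, z) and clock bias b
# from n >= 4 satellite positions and pseudoranges.
# Measurement model: rho_i = |sat_i - p| + c*b for each satellite i.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def gps_fix(sat_pos, pseudoranges, iterations=10):
    """sat_pos: (n, 3) array of satellite coordinates in meters.
    pseudoranges: length-n array of measured pseudoranges in meters."""
    p = np.zeros(3)  # initial guess: Earth's center
    b = 0.0          # receiver clock bias, seconds
    for _ in range(iterations):
        diffs = sat_pos - p                      # vectors receiver -> satellites
        ranges = np.linalg.norm(diffs, axis=1)   # geometric ranges
        residuals = pseudoranges - (ranges + C * b)
        # Jacobian: d(rho_i)/dp = -(sat_i - p)/|sat_i - p|, d(rho_i)/db = C
        J = np.hstack([-diffs / ranges[:, None],
                       np.full((len(ranges), 1), C)])
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        p, b = p + step[:3], b + step[3]
    return p, b
```

With exactly four satellites the linearized system is square, which is why four is the minimum quoted above; additional satellites simply overdetermine the least-squares solve.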

History
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s and used by the British Royal Navy during World War II.

In 1956, the German-American physicist Friedwardt Winterberg proposed a test of general relativity — detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites. Calculations using general relativity determined that the clocks on the GPS satellites would be seen by the Earth's observers to run 38 microseconds faster per day (than those on the Earth), and this was corrected for in the design of GPS.
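As a rough cross-check of that 38-microsecond figure, the standard back-of-the-envelope calculation adds the gravitational blueshift (the satellite clock sits higher in Earth's potential) to the special-relativistic slowing caused by orbital speed. The sketch below uses textbook constants and a circular-orbit approximation, so the result is approximate.

```python
# Approximate daily clock offset of a GPS satellite relative to the ground,
# using a circular-orbit approximation and standard constants.
GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0      # speed of light, m/s
R_EARTH = 6.371e6      # mean Earth radius, m
R_ORBIT = 2.656e7      # GPS orbital radius (~20,200 km altitude), m

# General relativity: a clock higher in the potential well runs faster.
grav = GM / c**2 * (1 / R_EARTH - 1 / R_ORBIT)
# Special relativity: the moving satellite clock runs slower by v^2/(2c^2).
kinematic = -(GM / R_ORBIT) / (2 * c**2)   # v^2 = GM/r for a circular orbit

day = 86_400  # seconds
print(f"gravitational: {grav * day * 1e6:+.1f} us/day")              # about +45.7
print(f"kinematic:     {kinematic * day * 1e6:+.1f} us/day")         # about -7.2
print(f"net:           {(grav + kinematic) * day * 1e6:+.1f} us/day")  # about +38.5
```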

The Soviet Union launched the first man-made satellite, Sputnik 1, in 1957. Two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL), decided to monitor Sputnik's radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to the lab's UNIVAC to do the heavy calculations required. The next spring, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given that of the satellite. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system. In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.

Official logo for NAVSTAR GPS

Emblem of the 50th Space Wing
The first satellite navigation system, TRANSIT, used by the United States Navy, was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the ability to place accurate clocks in space, a technology required by GPS. In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.
While there were wide needs for accurate navigation in military and civilian sectors, almost none of them was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded, and it is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier.

Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs. The USAF, with two thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The Navy and Air Force were developing their own technologies in parallel to solve what was essentially the same problem. To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (such as the Russian SS-24 and SS-25), so the need to fix the launch position had similarity to the SLBM situation.

In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN. A follow-on study, Project 57, was conducted in 1963, and it was "in this study that the GPS concept was born." That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS" and promised increased accuracy for Air Force bombers as well as ICBMs. Updates from the Navy's TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory continued advancements with its Timation (Time Navigation) satellites, first launched in 1967, with the third one, launched in 1974, carrying the first atomic clock into orbit.
Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying. The SECOR system included three ground-based transmitters from known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969. Decades later, during the early years of GPS, civilian surveying became one of the first fields to make use of the new technology, because surveyors could reap benefits of signals from the less-than-complete GPS constellation years before it was declared operational. GPS can be thought of as an evolution of the SECOR system where the ground-based transmitters have been migrated into orbit.

Development

With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program.

During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that "the real synthesis that became GPS was created." Later that year, the DNSS program was named Navstar, or Navigation System Using Timing and Ranging. With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name, Navstar-GPS, was used to identify the constellation of Navstar satellites. Ten "Block I" prototype satellites were launched between 1978 and 1985 (with one prototype destroyed in a launch failure).
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down in 1983 after straying into the USSR's prohibited airspace in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good. The first Block II satellite was launched on February 14, 1989, and the 24th satellite was launched in 1994. The cost of the GPS program to this point, not including the cost of the user equipment but including the costs of the satellite launches, has been estimated at about US$5 billion (then-year dollars). Roger L. Easton is widely credited as the primary inventor of GPS.
Initially, the highest quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded (Selective Availability). This changed with President Bill Clinton signing a policy directive in 1996 to turn off Selective Availability in May 2000 to provide the same precision to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, because of the widespread growth of differential GPS services to improve civilian accuracy and eliminate the U.S. military advantage. Moreover, the U.S. military was actively developing technologies to deny GPS service to potential adversaries on a regional basis.
Since its deployment, the U.S. has implemented several improvements to the GPS service including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S. Department of Defense through a series of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial market.
As of early 2015, high-quality, FAA-grade Standard Positioning Service (SPS) GPS receivers provided horizontal accuracy of better than 3.5 meters, although many factors such as receiver quality and atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States Government as a national resource. The Department of Defense is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning GPS and related systems.[28] The executive committee is chaired jointly by the deputy secretaries of defense and transportation. Its membership includes equivalent-level officials from the departments of state, commerce, and homeland security, the joint chiefs of staff, and NASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison.

Summary of satellites
Block   Launch period   Satellite launches                             Currently in orbit
                        Success   Failure   In prep.   Planned         and healthy
I       1978–1985       10        1         0          0               0
II      1989–1990       9         0         0          0               0
IIA     1990–1997       19        0         0          0               2
IIR     1997–2004       12        1         0          0               12
IIR-M   2005–2009       8         0         0          0               7
IIF     From 2010       11        0         1          0               11
IIIA    From 2017       0         0         0          12              0
IIIB    —               0         0         0          8               0
IIIC    —               0         0         0          16              0
Total                   66        2         1          36              3

The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis," and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses."



Monday, November 9, 2015

Network


                   Computer Network  

A computer network or data network is a telecommunications network which allows computers to exchange data. In computer networks, networked computing devices exchange data with each other along network links (data connections). The connections between nodes are established using either cable media or wireless media. The best-known computer network is the Internet.

Network computer devices that originate, route and terminate the data are called network nodes. Nodes can include hosts such as personal computers, phones and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other.

Computer networks differ in the transmission media used to carry their signals, the communications protocols to organize network traffic, the network's size, topology and organizational intent. In most cases, communications protocols are layered on (i.e. work using) other more specific or more general communications protocols, except for the physical layer that directly deals with the transmission media.

Computer networks support applications such as access to the World Wide Web, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging.

History

In the late 1950s early networks of computers included the military radar system Semi-Automatic Ground Environment (SAGE).
In 1959 Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres.
In 1960 the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes.
In 1962 J.C.R. Licklider developed a working group he called the "Intergalactic Computer Network", a precursor to the ARPANET, at the Advanced Research Projects Agency (ARPA).
In 1964 researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections.
Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network.
In 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network (WAN). This was an immediate precursor to the ARPANET, of which Roberts became program manager.
Also in 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control.
In 1969 the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah became connected as the beginning of the ARPANET network using 50 kbit/s circuits.
In 1972 commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks.
In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system that was based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978. In 1979 Robert Metcalfe pursued making Ethernet an open standard.
In 1976 John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.
In 1995 the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. The ability of Ethernet to scale easily (such as quickly adapting to support new fiber-optic cable speeds) is a contributing factor to its continued use as of 2015.
Properties
Computer networking may be considered a branch of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines.

A computer network facilitates interpersonal communications allowing users to communicate efficiently and easily via various means: email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing. Providing access to information on shared storage devices is an important feature of many networks. A network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network. A network allows sharing of network and computing resources. Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer. Distributed computing uses computing resources across a network to accomplish tasks.

A computer network may also be used by computer crackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.

Network packet

Computer communication links that do not support packets, such as traditional point-to-point telecommunication links, simply transmit data as a bit stream. However, most information in computer networks is carried in packets. A network packet is a formatted unit of data (a list of bits or bytes, usually a few tens of bytes to a few kilobytes long) carried by a packet-switched network.

In packet networks, the data is formatted into packets that are sent through the network to their destination. Once the packets arrive they are reassembled into their original message. With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link isn't overused.

Packets consist of two kinds of data: control information, and user data (payload). The control information provides data the network needs to deliver the user data, for example: source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
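As a toy illustration of that header-plus-payload layout, the sketch below packs a few invented control fields (source and destination addresses, a sequence number and a payload length) in front of the user data. The field layout is made up for the example; it does not correspond to any real protocol's packet format.

```python
# Build and parse a toy packet: a fixed header of control information
# followed by the payload. The field layout is invented for illustration.
import struct

HEADER_FMT = "!4s4sHH"   # network byte order: src addr, dst addr, seq no, payload length
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 12 bytes

def make_packet(src: bytes, dst: bytes, seq: int, payload: bytes) -> bytes:
    header = struct.pack(HEADER_FMT, src, dst, seq, len(payload))
    return header + payload

def parse_packet(packet: bytes):
    src, dst, seq, length = struct.unpack(HEADER_FMT, packet[:HEADER_LEN])
    return src, dst, seq, packet[HEADER_LEN:HEADER_LEN + length]

pkt = make_packet(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", 7, b"hello")
print(parse_packet(pkt))   # -> addresses, sequence number 7, b'hello'
```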

Often the route a packet needs to take through a network is not immediately available. In that case the packet is queued and waits until a link is free.

Network Topology    
  
The physical layout of a network is usually less important than the topology that connects network nodes. Most diagrams that describe a physical network are therefore topological, rather than geographic. The symbols on these diagrams usually denote network links and network nodes.

Network links

The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cable (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer.

A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.

Wired Technologies

The following wired technologies are ordered, roughly, from slowest to fastest transmission speed.

Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation helps minimize interference and distortion. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios.

An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent and helps enable data rates of up to trillions of bits per second. Optical fibers can be used for long runs of cable carrying very high data rates, and are used for undersea cables to interconnect continents.
Price is a main factor distinguishing wired and wireless technology options in a business. Wireless options command a price premium, so wired computers, printers, and other devices can be the more economical choice. Before purchasing hard-wired technology products, however, the restrictions and limitations of those selections should be reviewed. Business and employee needs may override any cost considerations.

Wireless Technologies

Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low-gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart.
Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular and PCS systems – These use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi. Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.

Exotic technologies

There have been various attempts at transporting data over exotic media:

IP over Avian Carriers was a humorous April Fools' Day Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
Extending the Internet to interplanetary dimensions via radio waves.
Both cases have a large round-trip delay time, which gives slow two-way communication, but doesn't prevent sending large amounts of information.

Network nodes

Apart from any physical transmission medium, networks are built from additional basic system building blocks, such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls.

Network interfaces

A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media and to process low-level network information; many network interfaces are built in, while others take the form of accessory cards (such as an ATM interface card). For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry. The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole.

In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address, usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
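
A small Python sketch of this split, assuming the usual colon-separated notation for the address:

    # Split the six octets of an Ethernet MAC address into the IEEE-assigned
    # manufacturer prefix (OUI) and the device-specific part.
    def split_mac(mac: str):
        octets = bytes(int(part, 16) for part in mac.split(":"))
        if len(octets) != 6:
            raise ValueError("a MAC address has exactly six octets")
        oui, device = octets[:3], octets[3:]
        return oui.hex(":"), device.hex(":")

    print(split_mac("00:1A:2B:3C:4D:5E"))  # ('00:1a:2b', '3c:4d:5e')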

Repeaters and hubs

A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.

A repeater with multiple ports is known as a hub. Repeaters work on the physical layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.

Hubs have been rendered mostly obsolete by modern switches, but repeaters are still used for long-distance links, notably undersea cabling.

Bridges

A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks.

Bridges come in three basic types:

Local bridges: Directly connect LANs.
Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to LANs.

Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the MAC addresses in the frames. A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
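
A minimal Python sketch of this learning-and-forwarding behavior (the port numbers and MAC strings are illustrative):

    # The switch associates each source MAC address with the port it arrived
    # on, forwards known destinations out a single port, and floods unknown
    # destinations to every port except the one the frame came in on.
    class LearningSwitch:
        def __init__(self, num_ports: int):
            self.num_ports = num_ports
            self.mac_table = {}  # MAC address -> port number

        def handle_frame(self, in_port: int, src_mac: str, dst_mac: str):
            self.mac_table[src_mac] = in_port        # learn the source
            if dst_mac in self.mac_table:
                return [self.mac_table[dst_mac]]     # forward to one port
            # unknown destination: flood to every port except the source
            return [p for p in range(self.num_ports) if p != in_port]

    sw = LearningSwitch(4)
    print(sw.handle_frame(0, "aa:aa", "bb:bb"))  # floods: [1, 2, 3]
    print(sw.handle_frame(1, "bb:bb", "aa:aa"))  # learned: [0]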

Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier).

Routers

A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. (A destination in a routing table can include a "null" interface, also known as the "black hole" interface, because data can go into it but no further processing is done for it.)
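
The following Python sketch illustrates one common lookup rule, longest-prefix match, over a toy routing table built with the standard ipaddress module; the networks and interface names are invented for illustration:

    import ipaddress

    # Toy routing table: destination network -> outgoing interface.
    routing_table = {
        ipaddress.ip_network("10.0.0.0/8"):  "eth0",
        ipaddress.ip_network("10.1.0.0/16"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"):   "eth2",   # default route
    }

    def forward(destination: str) -> str:
        dst = ipaddress.ip_address(destination)
        matches = [n for n in routing_table if dst in n]
        best = max(matches, key=lambda n: n.prefixlen)  # longest prefix wins
        return routing_table[best]

    print(forward("10.1.2.3"))  # eth1 (the /16 is more specific than the /8)
    print(forward("8.8.8.8"))   # eth2 (only the default route matches)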

Modems

Modems (modulator-demodulators) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using Digital Subscriber Line (DSL) technology.

Firewalls

A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.

Network structure

Network topology is the layout or organizational hierarchy of interconnected nodes of a computer network. Different network topologies can affect throughput, but reliability is often more critical. With many technologies, such as bus networks, a single failure can cause the network to fail entirely. In general the more interconnections there are, the more robust the network is; but the more expensive it is to install.

Common layouts

1. A bus network: all nodes are connected to a common medium, and all communication travels along this medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2.

2. A star network: all nodes are connected to a special central node. This is the typical layout found in a Wireless LAN, where each wireless client connects to the central Wireless access point.

3. A ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a topology.

4. A mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.

5. A fully connected network: each node is connected to every other node in the network.

6. A tree network: nodes are arranged hierarchically.

Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is often a star, because all neighboring connections can be routed via a central physical location.
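
As a sketch of the graph view of these layouts, the following Python snippet builds toy ring and star topologies and checks the property they all share, namely that every node can reach every other (the adjacency lists are illustrative):

    # Breadth/depth-style traversal: collect every node reachable from start.
    def reachable(adjacency: dict, start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adjacency[node])
        return seen

    # a 4-node ring: each node links to its left and right neighbor
    ring = {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    # a 4-node star: every leaf links only to the central node 0
    star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}

    for name, topo in [("ring", ring), ("star", star)]:
        print(name, "fully connected:", reachable(topo, 0) == set(topo))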

Overlay network

An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.

Overlay networks have been around since the invention of networking when computer systems were connected over telephone lines using modems, before any data network existed.

The most striking example of an overlay network is the Internet itself, which was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.

Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
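
A toy Python sketch of this key-to-node mapping idea (real DHTs such as Chord or Kademlia use consistent hashing so that nodes can join and leave without remapping most keys; this simplified version just takes the hash modulo the node count):

    import hashlib

    nodes = ["node-a", "node-b", "node-c", "node-d"]

    # Hash the key and map it deterministically onto one of the nodes, so
    # any participant can compute which node is responsible for a key.
    def node_for_key(key: str) -> str:
        digest = hashlib.sha1(key.encode()).digest()
        index = int.from_bytes(digest, "big") % len(nodes)
        return nodes[index]

    print(node_for_key("alice.txt"))  # always maps to the same node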

Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance, largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.

For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others.

Communications protocols

A communications protocol is a set of rules for exchanging information over network links. In a protocol stack (also see the OSI model), each protocol leverages the services of the protocol below it. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
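
As a rough illustration of this layering, the following Python sketch wraps a web request in simplified stand-ins for the TCP, IP, and 802.11 headers; the header contents are placeholders, not complete protocol headers:

    # Each layer wraps the data handed down from the layer above with its
    # own header, so the message gains one header per layer it descends.
    def http_layer(body: str) -> bytes:
        return f"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n{body}".encode()

    def tcp_layer(segment: bytes) -> bytes:
        return b"[TCP src=49152 dst=80]" + segment

    def ip_layer(packet: bytes) -> bytes:
        return b"[IP src=192.0.2.1 dst=93.184.216.34]" + packet

    def wifi_layer(frame: bytes) -> bytes:
        return b"[802.11 to=access-point]" + frame

    # The web request descends the stack, gaining one header per layer:
    on_the_air = wifi_layer(ip_layer(tcp_layer(http_layer(""))))
    print(on_the_air[:80])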

Whilst the use of protocol layering is today ubiquitous across the field of computer networking, it has been historically criticized by many researchers for two principal reasons. Firstly, abstracting the protocol stack in this way may cause a higher layer to duplicate functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis. Secondly, it is common that a protocol implementation at one layer may require data, state or addressing information that is only present at another layer, thus defeating the point of separating the layers in the first place. For example, TCP uses the ECN field in the IPv4 header as an indication of congestion; IP is a network layer protocol whereas TCP is a transport layer protocol.

Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.

There are many communication protocols, a few of which are described below.

IEEE 802
The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at levels 1 and 2 of the OSI model.

For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based Network Access Control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".

Ethernet

Ethernet, sometimes simply called LAN, is a family of protocols used in wired LANs, described by a set of standards together called IEEE 802.3 published by the Institute of Electrical and Electronics Engineers.

Wireless LAN

Wireless LAN, also widely known as WLAN or Wi-Fi, is probably the most well-known member of the IEEE 802 protocol family for home users today. It is standardized by IEEE 802.11 and shares many properties with wired Ethernet.

Internet Protocol Suite

The Internet Protocol Suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless as well as connection-oriented services over an inherently unreliable network traversed by datagram transmission at the Internet protocol (IP) level. At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability.

SONET/SDH

Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM (Pulse-Code Modulation) format. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.

Asynchronous Transfer Mode

Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet Protocol Suite or Ethernet that use variable sized packets or frames. ATM has similarity with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.

While the role of ATM is diminishing in favor of next-generation networks, it still plays a role in the last mile, which is the connection between an Internet service provider and the home user.

Geographic scale

A network can be characterized by its physical capacity or its organizational purpose. Use of the network, including user authorization and access rights, differs accordingly.

Nanoscale Network

A nanoscale communication network has key components, including message carriers, implemented at the nanoscale, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators, such as those found in biological systems, and also tends to operate in environments that would be too harsh for classical communication.

Personal Area Network (PAN)

A personal area network (PAN) is a computer network used for communication among computers and different information technology devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared communication typically form a wireless PAN.

Home Area Network (HAN)

A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable TV or digital subscriber line (DSL) provider.

Storage Area Network (SAN)

A storage area network (SAN) is a dedicated network that provides access to consolidated, block level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear like locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.

Campus Area Network (CAN)

A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant / owner (an enterprise, university, government, etc.).

For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.

Backbone Network

A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or sub-networks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area.

For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.

Another example of a backbone network is the Internet backbone, which is the set of wide area networks (WANs) and core routers that tie together all networks connected to the Internet.

Local Area Network (LAN)

A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Wired LANs are most likely based on Ethernet technology. Newer standards such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.

The defining characteristics of a LAN, in contrast to a wide area network (WAN), include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to 10 Gbit/s, and the IEEE has also standardized 40 and 100 Gbit/s rates. A LAN can be connected to a WAN using a router.


Metropolitan Area Network (MAN)

A Metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.

Wide Area Network (WAN)

A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

Enterprise Private Network (EPN)

An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.

Virtual Private Network (VPN)

A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.

A VPN may have best-effort performance or a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.

Global Area Network (GAN)

A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[20]

Organizational scope

Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.

Intranets

An intranet is a set of networks that are under the control of a single administrative entity. The intranet uses the IP protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. In common usage, an intranet refers to everything behind the router on a local area network.

Extranet

An extranet is a network that is also under the administrative control of a single organization, but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. Network connection to an extranet is often, but not always, implemented via WAN technology.

Internetwork

An internetwork is the connection of multiple computer networks via a common routing technology using routers.

Internet

The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW).

Participants in the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite, and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

Darknet

A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers, sometimes called "friends" (F2F), using non-standard protocols and ports.

Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.

Routing

Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks.

In packet switched networks, routing directs packet forwarding (the transit of logically addressed network packets from their source toward their ultimate destination) through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.

There are usually multiple routes that can be taken, and to choose between them, different elements can be considered to decide which routes get installed into the routing table, such as (sorted by priority; see the sketch after this list):

Prefix length: longer subnet masks are preferred (this applies both within and between routing protocols)
Metric: a lower metric/cost is preferred (only comparable within a single routing protocol)
Administrative distance: a lower distance is preferred (only comparable between different routing protocols)
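
A small Python sketch of these selection rules, with invented prefix lengths, administrative distances, and metrics (not taken from any particular router vendor):

    candidates = [
        # (prefix length, administrative distance, metric, description)
        (16, 110, 20, "via OSPF 10.1.0.0/16"),
        (16,  90, 40, "via EIGRP 10.1.0.0/16"),
        ( 8,   1,  0, "static 10.0.0.0/8"),
    ]

    # Prefer the longest prefix, then the lowest administrative distance,
    # then the lowest metric, in that order of priority.
    def best_route(routes):
        return min(routes, key=lambda r: (-r[0], r[1], r[2]))

    print(best_route(candidates))  # the EIGRP /16: more specific than the
                                   # /8, lower distance than the OSPF route
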
Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within localized environments.

Network service

Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.

The World Wide Web, e-mail, printing, and network file sharing are examples of well-known network services. Network services such as DNS (Domain Name System) map human-readable names to IP addresses (people remember names like “nm.lan” better than numbers like “210.121.67.18”), while DHCP ensures that the equipment on the network has a valid IP address.
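
For instance, a name lookup through the system resolver can be sketched in Python as follows (the printed addresses will vary):

    import socket

    # DNS turns a human-readable name into the IP addresses that
    # applications actually connect to.
    def resolve(hostname: str):
        results = socket.getaddrinfo(hostname, None)
        return sorted({info[4][0] for info in results})

    print(resolve("example.com"))  # a list of IPv4/IPv6 addresses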

Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.

Network performance

Depending on the installation requirements, network performance is usually measured by the quality of service of a telecommunications product. The parameters that affect it typically include throughput, jitter, bit error rate, and latency.

The following list gives examples of network performance measures for a circuit-switched network and one type of packet-switched network, viz. ATM:

Circuit-switched networks: In circuit switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo.
ATM: In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique and modem enhancements.
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.
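
As one illustration, the following Python sketch estimates a latency-related measure, the TCP connection setup time to a host, and uses the spread between samples as a rough proxy for jitter; real measurement tools such as ping use ICMP instead:

    import socket
    import statistics
    import time

    # Time several TCP handshakes to a server and report the mean delay
    # and the spread between samples.
    def connect_times(host: str, port: int = 80, samples: int = 5):
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=3):
                pass
            times.append((time.perf_counter() - start) * 1000)  # ms
        return times

    rtts = connect_times("example.com")
    print(f"latency ~{statistics.mean(rtts):.1f} ms, "
          f"jitter ~{statistics.stdev(rtts):.1f} ms")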

Network congestion

Network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queuing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in network throughput, or to an actual reduction in network throughput.

Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

Modern networks use congestion control and congestion avoidance techniques to try to avoid congestion collapse. These include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queuing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing priority schemes, so that some packets are transmitted with higher priority than others. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for some services. An example of this is 802.1p. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) local area networking over existing home wires (power lines, phone lines and coaxial cables).
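
A minimal Python sketch of the binary exponential backoff used by classic Ethernet, where after the n-th collision a station waits a random number of slot times drawn from 0 to 2^n − 1 (capped after 10 doublings):

    import random

    # After the n-th collision, pick a random wait from 0 .. 2**n - 1 slot
    # times, with the exponent capped at 10 doublings as in Ethernet.
    def backoff_slots(collisions: int) -> int:
        exponent = min(collisions, 10)
        return random.randrange(2 ** exponent)

    for n in range(1, 6):
        print(f"after collision {n}: wait {backoff_slots(n)} slot times")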

Network resilience

Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."

Security

Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is the authorization of access to data in a network, which is controlled by the network administrator. Users are assigned an ID and password that allows them access to information and programs within their authority. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies and individuals.

Network surveillance

Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.

Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.

Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent/investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high speed surveillance computers and biometrics software, and laws such as the Communications Assistance for Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.

However, many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T. The activist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".

End to end encryption

End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet providers or application service providers, from discovering or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.

Examples of end-to-end encryption include PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.

Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. Some such systems, for example LavaBit and SecretInk, have even described themselves as offering "end-to-end" encryption when they do not. Some systems that normally offer end-to-end encryption have turned out to contain a back door that subverts negotiation of the encryption key between the communicating parties, for example Skype or Hushmail.

The end-to-end encryption paradigm does not directly address risks at the communications endpoints themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the end points and the times and quantities of messages that are sent.
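
As a sketch of the end-to-end idea, the following Python snippet uses the PyNaCl library (an assumption: it must be installed separately): only the holder of the recipient's private key can decrypt, no matter how many servers relay the ciphertext in between.

    from nacl.public import PrivateKey, Box

    # Each party holds its own private key; intermediaries only ever see
    # the ciphertext.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    sender_box = Box(alice_key, bob_key.public_key)    # Alice's view
    ciphertext = sender_box.encrypt(b"meet at noon")   # relayed by servers

    receiver_box = Box(bob_key, alice_key.public_key)  # Bob's view
    print(receiver_box.decrypt(ciphertext))            # b'meet at noon'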

Views of networks

Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection to a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.

Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.