Saturday, October 31, 2015

Rocket

The word "rocket" can mean different things. Most people think of a tall, thin, round vehicle that launches into space. "Rocket" can also mean a type of engine, or a vehicle that uses that engine. A rocket (from Italian rocchetta, "little fuse") is a missile, spacecraft, aircraft or other vehicle that obtains thrust from a rocket engine. Rocket engine exhaust is formed entirely from propellant carried within the rocket before use. Rocket engines work by action and reaction: they push rockets forward simply by expelling their exhaust in the opposite direction at high speed, and can therefore work in space.
Indeed, rockets work more efficiently in space than in an atmosphere. Multi-stage rockets are capable of attaining escape velocity from Earth and therefore can achieve unlimited maximum altitude. Compared with air-breathing engines, rockets are lightweight and powerful and capable of generating large accelerations. Rockets rely on momentum, airfoils, auxiliary reaction engines, gimballed thrust, momentum wheels, deflection of the exhaust stream, propellant flow, spin, and/or gravity to help control flight.
Rockets for military and recreational uses date back to at least 13th century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology for the Space Age, including setting foot on the moon. Rockets are now used for fireworks, weaponry, ejection seats, and launch vehicles for artificial satellites, human spaceflight, and space exploration. Chemical rockets are the most common type of high power rocket, typically creating a high speed exhaust by the combustion of fuel with an oxidizer. The stored propellant can be a simple pressurized gas or a single liquid fuel that disassociates in the presence of a catalyst (monopropellants), two liquids that spontaneously react on contact (hypergolic propellants), two liquids that must be ignited to react, a solid combination of one or more fuels with one or more oxidizers (solid fuel), or solid fuel with liquid oxidant (hybrid propellant system). Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimize risks.
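The claim above that multi-stage rockets can reach escape velocity can be made concrete with the Tsiolkovsky rocket equation, delta-v = Isp · g0 · ln(m_full / m_empty). The sketch below uses made-up round numbers (a 300 s engine and a 90% propellant fraction), not figures from any real vehicle:

```python
import math

def delta_v(isp_s, m_full_kg, m_empty_kg, g0=9.80665):
    """Ideal velocity change (m/s) from the Tsiolkovsky rocket equation."""
    return isp_s * g0 * math.log(m_full_kg / m_empty_kg)

# One stage: 100 t fully fueled, 10 t after burnout, Isp = 300 s.
dv_single = delta_v(300, 100_000, 10_000)   # about 6.8 km/s

# If each stage achieves the same 10:1 mass ratio, the per-stage delta-v
# figures simply add, which is how staging reaches Earth escape velocity
# (about 11.2 km/s) that a single such stage cannot.
dv_two_stage = 2 * dv_single                # about 13.5 km/s
```

The logarithm is the reason staging helps: no realistic single-stage mass ratio buys enough delta-v on its own.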

History of Rockets

The availability of black powder (gunpowder) to propel projectiles was a precursor to its experimental use in weapons such as bombs, cannon, incendiary fire arrows and rocket-propelled fire arrows. The discovery of gunpowder was probably the product of centuries of alchemical experimentation in which Taoist alchemists were trying to create an elixir of immortality that would allow the person ingesting it to become physically immortal. However, anyone with a wood fire might have observed the acceleration of combustion that accidentally chosen saltpetre-containing rocks would have produced.

Exactly when the first flights of rockets occurred is contested. Merely lighting a centimeter-sized solid lump of gunpowder on one side can cause it to move via reaction (even without a nozzle for efficiency), so confinement in a tube and other design refinements may easily have followed for the experimentally-minded with ready access to saltpetre.

A problem for dating the first rocket flight is that Chinese fire arrows can be either arrows with explosives attached, or arrows propelled by gunpowder. There were reports of fire arrows and 'iron pots' that could be heard for 5 leagues (25 km, or 15 miles) when they exploded, causing devastation for a radius of 600 meters (2,000 feet), apparently due to shrapnel.[8] A common claim is that the first recorded use of a rocket in battle was by the Chinese in 1232 against the Mongol hordes at Kai Feng Fu.[9] However, the lowering of iron pots there may have been a way for a besieged army to blow up invaders. A scholarly reference occurs in the Ko Chieh Ching Yuan (The Mirror of Research), which states that in 998 AD a man named Tang Fu invented a fire arrow of a new kind having an iron head.

Less controversially, one of the earliest devices recorded that used internal-combustion rocket propulsion, was the 'ground-rat,' a type of firework recorded in 1264 as having frightened the Empress-Mother Kung Sheng at a feast held in her honor by her son the Emperor Lizong. Subsequently, one of the earliest texts to mention the use of rockets was the Huolongjing, written by the Chinese artillery officer Jiao Yu in the mid-14th century. This text also mentioned the use of the first known multistage rocket, the 'fire-dragon issuing from the water' (huo long chu shui), used mostly by the Chinese navy.

Uses of Rockets
1. Military
2. Science and research
3. Spaceflight
4. Rescue
5. Hobby, sport, and entertainment



Questions and Answers about Rockets

What Is a Rocket?
The word "rocket" can mean different things. Most people think of a tall, thin, round vehicle. They think of a rocket that launches into space. "Rocket" can mean a type of engine. The word also can mean a vehicle that uses that engine.
How Does a Rocket Engine Work?
Like most engines, rockets burn fuel. Most rocket engines turn the fuel into hot gas. The engine pushes the gas out its back. The gas makes the rocket move forward. A rocket is different from a jet engine. A jet engine needs air to work. A rocket engine doesn't need air. It carries with it everything it needs. A rocket engine works in space, where there is no air. There are two main types of rocket engines. Some rockets use liquid fuel. The main engines on the space shuttle orbiter use liquid fuel. The Russian Soyuz uses liquid fuels. Other rockets use solid fuels. On the side of the space shuttle are two white solid rocket boosters. They use solid fuels. Fireworks and model rockets also fly using solid fuels.

Why Does a Rocket Work?
In space, an engine has nothing to push against. So how do rockets move there? Rockets work by a scientific rule called Newton's third law of motion. The English scientist Sir Isaac Newton listed three laws of motion more than 300 years ago. His third law says that for every action, there is an equal and opposite reaction. The rocket pushes its exhaust backward; the exhaust pushes back on the rocket, making it move forward.
This rule can be seen on Earth. Imagine a person standing on a skateboard. Imagine that person throwing a bowling ball. The ball will go forward. The person on the skateboard will move, too. The person will move backward. Because the person is heavier, the bowling ball will move farther.
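The skateboard example follows directly from conservation of momentum. A quick check, using made-up round numbers for the masses and the throw speed:

```python
# Total momentum starts at zero, so after the throw:
#   m_ball * v_ball + m_person * v_person = 0
ball_mass = 7.0      # kg, bowling ball (illustrative value)
person_mass = 70.0   # kg, person plus skateboard (illustrative value)
ball_speed = 5.0     # m/s, forward speed of the thrown ball

person_speed = -(ball_mass * ball_speed) / person_mass
# The person recoils backward at 0.5 m/s: ten times heavier, so ten
# times slower than the ball, just as the text describes.
```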

When Were Rockets Invented?
The first rockets we know about were used in China in the 1200s. These solid rockets were used for fireworks. Armies also used them in wars. In the next 700 years, people made bigger and better solid rockets. Many of these were used for wars too. In 1969, the United States launched the first men to land on the moon using a Saturn V rocket.

How Does NASA Use Rockets?
Early NASA missions used rockets built by the military. Alan Shepard was the first American in space. He flew on the U.S. Army's Redstone rocket. John Glenn was the first American in orbit. He flew on an Atlas rocket. NASA's Gemini missions used the Titan II rocket. The first rockets NASA built to launch astronauts were the Saturn I, the Saturn IB and the Saturn V. These rockets were used for the Apollo missions. The Apollo missions sent men to the moon. A Saturn V also launched the Skylab space station. The space shuttle uses rocket engines.

NASA uses rockets to launch satellites. It also uses rockets to send probes to other worlds. These rockets include the Atlas V, the Delta II, the Pegasus and Taurus. NASA uses smaller "sounding rockets" for scientific research. These rockets go up and come back down. They do not fly into orbit.

How Will NASA Use Rockets in the Future?
NASA is working on new rockets. These rockets will not look like the space shuttle. They will look more like earlier rockets: tall, round and thin. These rockets will take astronauts into space. They will take supplies to the International Space Station. NASA also is working on a powerful new rocket called a heavy lift vehicle. This rocket will be able to take big loads into space.



What Is the International Space Station?
The International Space Station is a large spacecraft. It orbits around Earth. It is a home where astronauts live.

The space station is also a science lab. Many countries worked together to build it. They also work together to use it.

The space station is made of many pieces. The pieces were put together in space by astronauts. The space station's orbit is about 220 miles above Earth. NASA uses the station to learn about living and working in space. These lessons will help NASA explore space.
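The 220-mile altitude quoted above is enough to estimate the station's speed and orbital period from circular-orbit mechanics (v = sqrt(GM/r)). The constants below are standard physical values, not taken from the text:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24      # mass of Earth, kg
R_EARTH = 6_371_000.0   # mean radius of Earth, m

altitude_m = 220 * 1609.344        # 220 miles (from the text) in metres
r = R_EARTH + altitude_m           # orbital radius from Earth's centre

speed = math.sqrt(G * M_EARTH / r)           # about 7.7 km/s
period_min = 2 * math.pi * r / speed / 60    # about 92 minutes per orbit
```

At that speed the crew sees roughly sixteen sunrises a day.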


How Old Is the Space Station?
The first piece of the International Space Station was launched in 1998. A Russian rocket launched that piece. After that, more pieces were added. Two years later, the station was ready for people. The first crew arrived on November 2, 2000. People have lived on the space station ever since. Over time more pieces have been added. NASA and its partners around the world finished the space station in 2011.

How Big Is the Space Station?
The space station is as big inside as a house with five bedrooms. It has two bathrooms, a gymnasium and a big bay window. Six people are able to live there. It weighs almost a million pounds. It is big enough to cover a football field including the end zones. It has science labs from the United States, Russia, Japan and Europe.

What Are the Parts of the Space Station?
The space station has many parts. The parts are called modules. The first modules had parts needed to make the space station work. Astronauts also lived in those modules. Modules called "nodes" connect parts of the station to each other. Labs on the space station let astronauts do research.

On the sides of the space station are solar arrays. These arrays collect energy from the sun. They turn sunlight into electricity. Robot arms are attached outside. The robot arms helped to build the space station. They also can move astronauts around outside and control science experiments. Airlocks on the space station are like doors. Astronauts use them to go outside on spacewalks.
Docking ports are like doors, too. The ports allow visiting spacecraft to connect to the space station. New crews and visitors enter the station through the docking ports. Astronauts fly to the space station on the Russian Soyuz. The crew members use the ports to move supplies onto the station.

Why Is the Space Station Important?
The space station is a home in orbit. People have lived in space every day since the year 2000. The space station's labs are where crew members do research. This research could not be done on Earth. Scientists study what happens to people when they live in space. NASA has learned how to keep a spacecraft working for a long time. These lessons will be important in the future.
NASA has a plan to send humans deeper into space than ever before. The space station is one of the first steps. NASA will use lessons from the space station to get astronauts ready for the journey ahead.


Radio Devices

Radio is the radiation (wireless transmission) of electromagnetic energy through space. The biggest use of radio waves is to carry information, such as sound, by systematically changing (modulating) some property of the radiated waves, such as their amplitude, frequency, phase, or pulse width. When radio waves strike an electrical conductor, the oscillating fields induce an alternating current in the conductor. The information in the waves can be extracted and transformed back into its original form.
Radio systems need a transmitter to modulate (change) some property of the energy produced to impress a signal on it, for example using amplitude modulation or angle modulation (which can be frequency modulation or phase modulation). Radio systems also need an antenna to convert electric currents into radio waves, and vice versa. An antenna can be used for both transmitting and receiving. The electrical resonance of tuned circuits in radios allows individual stations to be selected. The electromagnetic wave is intercepted by a tuned receiving antenna. A radio receiver receives its input from an antenna and converts it into a form usable for the consumer, such as sound, pictures, digital data, measurement values, navigational positions, etc. Radio frequencies occupy the range from 3 kHz to 300 GHz, although commercially important uses of radio use only a small part of this spectrum.

A radio communication system sends signals by radio. The radio equipment involved in communication systems includes a transmitter and a receiver, each having an antenna and appropriate terminal equipment such as a microphone at the transmitter and a loudspeaker at the receiver in the case of a voice-communication system.

History

In 1864 James Clerk Maxwell showed mathematically that electromagnetic waves could propagate through free space. The effects of electromagnetic waves (then-unexplained "action at a distance" sparking behavior) were actually observed before and after Maxwell's work by many inventors and experimenters including Luigi Galvani (1791), Peter Samuel Munk (1835), Joseph Henry (1842), Samuel Alfred Varley (1852), Edwin Houston, Elihu Thomson, Thomas Edison (1875) and David Edward Hughes (1878).  Edison gave the effect the name "etheric force".
Edison was ridiculed by short-sighted "experts" when he noted radio effects while experimenting with the telegraph. He referred to this as etheric force in an announcement on November 28, 1875. Elihu Thomson and others ridiculed the idea, and unfortunately Edison listened to them. His explanation was not based on the electromagnetic waves described by Maxwell, but on "mutual-inductively coupled or magnetically coupled communication". He did take out U.S. Patent 465,971 ten years later, on a system of electrical wireless communication between ships. Hughes was likewise dismissed by "experts" when he went so far as to develop a working carbon detector for radio waves. In 1880 he demonstrated to the Royal Society that he could transmit and detect sparks up to 500 yards away. Unfortunately, the Society spurned this discovery as mere electromagnetic induction. Thanks in part to this panning of Edison and Hughes by supposedly scientific authorities, the realization of their discoveries was set back almost a generation.
In 1886 Heinrich Rudolf Hertz noticed the same sparking phenomenon and, in published experiments (1887-1888), was able to demonstrate the existence of electromagnetic waves in an experiment confirming Maxwell's theory of electromagnetism. The discovery of these "Hertzian waves" (radio waves) prompted many experiments by physicists. An August 1894 lecture by the British physicist Oliver Lodge, where he transmitted and received "Hertzian waves" at distances up to 50 meters, was followed up a year later with experiments by Indian physicist Jagadish Bose in radio microwave optics and construction of a radio based lightning detector by Russian physicist Alexander Stepanovich Popov.

Starting in late 1894, having purchased Edison's ship-to-ship communication patent in 1891, Guglielmo Marconi began pursuing the idea of building a wireless telegraphy system based on Hertzian waves (radio). Marconi gained a patent on the system in 1896 and developed it into a commercial communication system over the next few years.
Early 20th century radio systems transmitted messages by continuous wave code only. Early attempts at developing a system of amplitude modulation for voice and music were demonstrated in 1900 and 1906, but had little success. World War I accelerated the development of radio for military communications, and in this era the first vacuum tubes were applied to radio transmitters and receivers. Electronic amplification was a key development in changing radio from an experimental practice by experts into a home appliance. After the war, commercial radio broadcasting began in the 1920s and became an important mass medium for entertainment and news.
World War II again accelerated development of radio for the wartime purposes of aircraft and land communication, radio navigation and radar. After the war, the experiments in television that had been interrupted were resumed, and it also became an important home entertainment medium.
Uses of radio
Early uses were maritime, for sending telegraphic messages using Morse code between ships and land. The earliest users included the Japanese Navy scouting the Russian fleet during the Battle of Tsushima in 1905. One of the most memorable uses of marine telegraphy was during the sinking of the RMS Titanic in 1912, including communications between operators on the sinking ship and nearby vessels, and communications to shore stations listing the survivors.

Radio was used to pass on orders and communications between armies and navies on both sides in World War I; Germany used radio communications for diplomatic messages once it discovered that its submarine cables had been tapped by the British. The United States passed on President Woodrow Wilson's Fourteen Points to Germany via radio during the war. Broadcasting began from San Jose, California in 1909,[22] and became feasible in the 1920s, with the widespread introduction of radio receivers, particularly in Europe and the United States. Besides broadcasting, point-to-point broadcasting, including telephone messages and relays of radio programs, became widespread in the 1920s and 1930s. Another use of radio in the pre-war years was the development of detection and locating of aircraft and ships by the use of radar (Radio Detection and Ranging).
Today, radio takes many forms, including wireless networks and mobile communications of all types, as well as radio broadcasting. Before the advent of television, commercial radio broadcasts included not only news and music, but dramas, comedies, variety shows, and many other forms of entertainment (the era from the late 1920s to the mid-1950s is commonly called radio's "Golden Age"). Radio was unique among methods of dramatic presentation in that it used only sound. For more, see radio programming.

Etymology

The term "radio" is derived from the Latin word radius, meaning "spoke of a wheel, beam of light, ray". It was first applied to communications in 1881 when, at the suggestion of French scientist Ernest Mercadier, Alexander Graham Bell adopted "radiophone" (meaning "radiated sound") as an alternate name for his photophone optical transmission system. However, this invention would not be widely adopted. Heinrich Hertz established the existence of electromagnetic radiation in the late 1880s, and initially various terms were used for the phenomenon, with early descriptions of the radiation itself including "Hertzian waves", "electric waves", and "ether waves", while phrases describing its use in communications included "spark telegraphy", "space telegraphy", "aerography" and, eventually and most commonly, "wireless telegraphy". However, "wireless" included a broad variety of related electronic technologies, including electrostatic induction, electromagnetic induction and aquatic and earth conduction, so there was a need for a more precise term referring exclusively to electromagnetic radiation.

The first use of radio- in conjunction with electromagnetic radiation appears to have been by French physicist Edouard Branly, who in 1890 developed a version of a coherer receiver which he called a radio-conductor. The radio- prefix was later used to form additional descriptive compound and hyphenated words, especially in Europe; for example, the French text of the Berlin Radiotelegraphic Convention, signed on November 3, 1906, uses the phrases radiotelegraphique and radiotelegrammes. The use of "radio" as a standalone word dates back to at least December 30, 1904, when instructions issued by the British Post Office for transmitting telegrams specified that "The word 'Radio'... is sent in the Service Instructions".
This practice was universally adopted, and the word "radio" introduced internationally, by the 1906 Berlin Radiotelegraphic Convention, which included a Service Regulation specifying that "Radio telegrams shall show in the preamble that the service is 'Radio'".

The switch to "radio" in place of "wireless" took place slowly and unevenly in the English-speaking world. Lee de Forest helped popularize the new word in the United States; in early 1907 he founded the DeForest Radio Telephone Company, and his letter in the June 22, 1907 Electrical World about the need for legal restrictions warned that "Radio chaos will certainly be the result until such stringent regulation is enforced". The United States Navy would also play a role. Although its translation of the 1906 Berlin Convention used the terms "wireless telegraph" and "wireless telegram", by 1912 it began to promote the use of "radio" instead. The term started to become preferred by the general public in the 1920s with the introduction of broadcasting. ("Broadcasting" is based upon an agricultural term meaning roughly "scattering seeds widely".) British Commonwealth countries continued to commonly use the term "wireless" until the mid-20th century, though the magazine of the British Broadcasting Corporation in the UK has been called Radio Times since its founding in the early 1920s.
In recent years the more general term "wireless" has gained renewed popularity, even for devices using electromagnetic radiation, through the rapid growth of short-range computer networking, e.g., Wireless Local Area Network (WLAN), Wi-Fi, and Bluetooth, as well as mobile telephony, e.g., GSM and UMTS cell phones. Today, the term "radio" specifies the transceiver device or chip, whereas "wireless" refers to the lack of physical connections; thus equipment employs embedded radio transceivers, but operates as wireless devices over wireless sensor networks.

Processes

Radio communication works by transducing information such as sound into an electrical signal, which is then sent as an electromagnetic radio wave from a transmitter. A receiver intercepts the radio wave and extracts the information-bearing electronic signal, which is converted back using another transducer such as a speaker.
Radio systems used for communication have the following elements. With more than 100 years of development, each process is implemented by a wide range of methods, specialized for different communications purposes.


Transmitter and modulation

Each system contains a transmitter. This consists of a source of electrical energy, producing alternating current of a desired frequency of oscillation. The transmitter contains a system to modulate (change) some property of the energy produced to impress a signal on it. This modulation might be as simple as turning the energy on and off, or altering more subtle properties such as amplitude, frequency, phase, or combinations of these properties. The transmitter sends the modulated electrical energy to a tuned resonant antenna; this structure converts the rapidly changing alternating current into an electromagnetic wave that can move through free space (sometimes with a particular polarization). An audio signal may be carried by either an AM or an FM radio wave.
Amplitude modulation of a carrier wave works by varying the strength of the transmitted signal in proportion to the information being sent. For example, changes in the signal strength can be used to reflect the sounds to be reproduced by a speaker, or to specify the light intensity of television pixels. It was the method used for the first audio radio transmissions, and remains in use today. "AM" is often used to refer to the medium wave broadcast band (see AM radio), but amplitude modulation is also used in various radiotelephone services such as Citizens Band radio, amateur radio and especially aviation, due to its ability to be received under very weak signal conditions and its immunity to the capture effect, which allows more than one signal to be heard simultaneously.
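The "strength varies with the information" idea can be sketched directly. The snippet below generates an AM waveform for a pure 1 kHz tone; the 12 kHz carrier and 48 kHz sample rate are toy values chosen for readability (real AM broadcast carriers sit near 1 MHz):

```python
import math

def am_sample(t, carrier_hz, audio, depth=0.5):
    """One sample of an amplitude-modulated carrier at time t (seconds).

    audio(t) must return a value in [-1, 1]; depth is the modulation index.
    """
    return (1.0 + depth * audio(t)) * math.cos(2 * math.pi * carrier_hz * t)

tone = lambda t: math.sin(2 * math.pi * 1000 * t)  # 1 kHz test tone
samples = [am_sample(n / 48_000, 12_000, tone) for n in range(48_000)]
# The envelope swings between 1 - depth and 1 + depth; a simple AM
# detector recovers the audio by tracking that envelope.
```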

Frequency modulation varies the frequency of the carrier. The instantaneous frequency of the carrier is directly proportional to the instantaneous value of the input signal. FM has the "capture effect" whereby a receiver only receives the strongest signal, even when others are present. Digital data can be sent by shifting the carrier's frequency among a set of discrete values, a technique known as frequency-shift keying. FM is commonly used at Very high frequency (VHF) radio frequencies for high-fidelity broadcasts of music and speech (see FM broadcasting). Analog TV sound is also broadcast using FM.
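The proportionality described above (instantaneous frequency directly proportional to the input) can be sketched by accumulating phase sample by sample; the carrier and deviation values below are illustrative, not real broadcast parameters:

```python
import math

def fm_samples(audio, carrier_hz, deviation_hz, rate_hz, n):
    """n samples of a frequency-modulated carrier.

    Instantaneous frequency = carrier_hz + deviation_hz * audio(t);
    the phase is a running sum (discrete integral) of that frequency.
    """
    out, phase = [], 0.0
    for i in range(n):
        inst_freq = carrier_hz + deviation_hz * audio(i / rate_hz)
        phase += 2 * math.pi * inst_freq / rate_hz
        out.append(math.cos(phase))
    return out

tone = lambda t: math.sin(2 * math.pi * 1000 * t)  # 1 kHz test tone
samples = fm_samples(tone, 12_000, 2_000, 48_000, 48_000)
# The amplitude never changes -- only the frequency does -- which is why
# FM is largely immune to the amplitude noise that troubles AM.
```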

Angle modulation alters the instantaneous phase of the carrier wave to transmit a signal. It may be either FM or phase modulation (PM).
Antenna
An antenna (or aerial) is an electrical device which converts electric currents into radio waves, and vice versa. It is usually used with a radio transmitter or radio receiver. In transmission, a radio transmitter supplies an electric current oscillating at radio frequency (i.e. high frequency AC) to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the power of an electromagnetic wave in order to produce a tiny voltage at its terminals that is applied to a receiver to be amplified. Some antennas can be used for both transmitting and receiving, even simultaneously, depending on the connected equipment.
Propagation
Once generated, electromagnetic waves travel through space either directly, or have their path altered by reflection, refraction or diffraction. The intensity of the waves diminishes due to geometric dispersion (the inverse-square law); some energy may also be absorbed by the intervening medium in some cases. Noise will generally alter the desired signal; this electromagnetic interference comes from natural sources, as well as from artificial sources such as other transmitters and accidental radiators. Noise is also produced at every step due to the inherent properties of the devices used. If the magnitude of the noise is large enough, the desired signal will no longer be discernible; the signal-to-noise ratio is the fundamental limit to the range of radio communications.
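The inverse-square spreading described above is captured by the Friis free-space path-loss formula. This is a sketch assuming ideal isotropic antennas and no absorption, conditions rarely met exactly in practice:

```python
import math

def received_power_dbm(tx_power_dbm, freq_hz, distance_m):
    """Received power under free-space (inverse-square) spreading.

    Free-space path loss for isotropic antennas:
    FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c).
    """
    c = 299_792_458.0  # speed of light, m/s
    fspl_db = (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
               + 20 * math.log10(4 * math.pi / c))
    return tx_power_dbm - fspl_db

# Doubling the distance always costs about 6 dB (a factor of 4 in power):
p1 = received_power_dbm(30, 100e6, 1_000)   # 1 W at 100 MHz, 1 km
p2 = received_power_dbm(30, 100e6, 2_000)   # same link at 2 km
```

The received power is some 70 dB below the transmitted power even at 1 km, illustrating why the signal-to-noise ratio, not the transmitted energy, limits range.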
Resonance
Electrical resonance of tuned circuits in radios allows individual stations to be selected. A resonant circuit will respond strongly to a particular frequency and much less so to differing frequencies. This allows the radio receiver to discriminate between multiple signals differing in frequency.
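For an ideal LC tuned circuit the selected frequency is f = 1/(2π·sqrt(LC)). The component values below are illustrative, but a 240 µH coil with a roughly 35-365 pF variable capacitor does sweep approximately the AM broadcast band:

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC circuit: f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Turning the tuning knob varies the capacitor, moving the resonance
# across the band so one station at a time responds strongly.
f_low = resonant_frequency_hz(240e-6, 365e-12)   # ~540 kHz
f_high = resonant_frequency_hz(240e-6, 35e-12)   # ~1.7 MHz
```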


Receiver and demodulation

A simple crystal receiver consists of an antenna, an adjustable electromagnetic coil, a crystal rectifier, a capacitor, headphones and a ground connection. The electromagnetic wave is intercepted by a tuned receiving antenna; this structure captures some of the energy of the wave and returns it to the form of oscillating electrical currents. At the receiver, these currents are demodulated, that is, converted to a usable signal form by a detector sub-system. The receiver is "tuned" to respond preferentially to the desired signals, and to reject undesired signals.
Early radio systems relied entirely on the energy collected by an antenna to produce signals for the operator. Radio became more useful after the invention of electronic devices such as the vacuum tube and later the transistor, which made it possible to amplify weak signals. Today radio systems are used for applications from walkie-talkie children's toys to the control of space vehicles, as well as for broadcasting, and many other applications.
A radio receiver receives its input from an antenna, uses electronic filters to separate a wanted radio signal from all other signals picked up by this antenna, amplifies it to a level suitable for further processing, and finally converts through demodulation and decoding the signal into a form usable for the consumer, such as sound, pictures, digital data, measurement values, navigational positions, etc.
Radio band
Light comparison
Name            Frequency (wavelength)                  Photon energy
1. Gamma ray    > 30 EHz (< 0.01 nm)                    124 keV - 300+ GeV
2. X-ray        30 EHz - 30 PHz (0.01 nm - 10 nm)       124 eV - 120 keV
3. Ultraviolet  30 PHz - 750 THz (10 nm - 400 nm)       3.1 eV - 124 eV
4. Visible      750 THz - 428.5 THz (400 nm - 700 nm)   1.7 eV - 3.1 eV
5. Infrared     428.5 THz - 300 GHz (700 nm - 1 mm)     1.24 meV - 1.7 eV
6. Microwave    300 GHz - 300 MHz (1 mm - 1 m)          1.24 µeV - 1.24 meV
7. Radio        300 MHz - 3 kHz (1 m - 100 km)          12.4 peV - 1.24 µeV

Radio frequencies occupy the range from 3 kHz to 300 GHz, although commercially important uses of radio use only a small part of this spectrum. Other types of electromagnetic radiation, with frequencies above the RF range, are infrared, visible light, ultraviolet, X-rays and gamma rays. Since the energy of an individual photon of radio frequency is too low to remove an electron from an atom, radio waves are classified as non-ionizing radiation.
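The non-ionizing claim is just E = h·f arithmetic: even the most energetic radio photon carries about a microelectronvolt, far below the few-eV energies needed to remove an electron from an atom.

```python
PLANCK_EV_S = 4.135667696e-15   # Planck constant, in eV*s

def photon_energy_ev(freq_hz):
    """Photon energy E = h * f, in electron-volts."""
    return PLANCK_EV_S * freq_hz

e_low = photon_energy_ev(3e3)      # 3 kHz   -> ~1.2e-11 eV
e_high = photon_energy_ev(300e6)   # 300 MHz -> ~1.24e-6 eV (1.24 ueV)
# Ionization energies are a few eV (e.g. ~13.6 eV for hydrogen), so even
# the top of the radio band falls short by roughly seven orders of magnitude.
```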
Communication systems
A radio communication system sends signals by radio. Types of radio communication systems deployed depend on technology, standards, regulations, radio spectrum allocation, user requirements, service positioning, and investment.

The radio equipment involved in communication systems includes a transmitter and a receiver, each having an antenna and appropriate terminal equipment such as a microphone at the transmitter and a loudspeaker at the receiver in the case of a voice-communication system. The power consumed in a transmitting station varies depending on the distance of communication and the transmission conditions. The power received at the receiving station is usually only a tiny fraction of the transmitter's output, since communication depends on receiving the information, not the energy that was transmitted.

Classical radio communications systems use frequency-division multiplexing (FDM) as a strategy to split up and share the available radio-frequency bandwidth for use by different parties' communications concurrently. Modern radio communication systems include those that divide up a radio-frequency band by time-division multiplexing (TDM) and code-division multiplexing (CDM) as alternatives to the classical FDM strategy. These systems offer different tradeoffs in supporting multiple users, beyond the FDM strategy that was ideal for broadcast radio but less so for applications such as mobile telephony.

A radio communication system may send information only one way. For example, in broadcasting a single transmitter sends signals to many receivers. Two stations may take turns sending and receiving, using a single radio frequency; this is called "simplex." By using two radio frequencies, two stations may continuously and concurrently send and receive signals - this is called "duplex" operation.


Audio

One-way:
AM radio uses amplitude modulation, in which the amplitude of the transmitted signal is made proportional to the sound amplitude captured by the microphone, while the transmitted frequency remains unchanged. Transmissions are affected by static and interference because lightning and other sources of radio emissions on the same frequency add their amplitudes to the original transmitted amplitude.
In the early part of the 20th century, American AM radio stations broadcast with powers as high as 500 kW, and some could be heard worldwide; these stations' transmitters were commandeered for military use by the US Government during World War II. Currently, the maximum broadcast power for a civilian AM radio station in the United States and Canada is 50 kW, and the majority of stations that emit signals this powerful were grandfathered in (see List of 50 kW AM radio stations in the United States). In 1986 KTNN received the last granted 50,000 watt license. These 50 kW stations are generally called "clear channel" stations (not to be confused with Clear Channel Communications), because within North America each of these stations has exclusive use of its broadcast frequency throughout part or all of the broadcast day.

FM broadcast radio sends music and voice with less noise than AM radio. It is often mistakenly thought that FM is higher fidelity than AM, but that is not true. AM is capable of the same audio bandwidth that FM employs. AM receivers typically use narrower filters in the receiver to recover the signal with less noise. AM stereo receivers can reproduce the same audio bandwidth that FM does due to the wider filter used in an AM stereo receiver, but today, AM radios limit the audio bandpass to 3–5 kHz. In frequency modulation, amplitude variation at the microphone causes the transmitter frequency to fluctuate. Because the audio signal modulates the frequency and not the amplitude, an FM signal is not subject to static and interference in the same way as AM signals. Due to its need for a wider bandwidth, FM is transmitted in the Very High Frequency (VHF, 30 MHz to 300 MHz) radio spectrum.
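The contrast between the two modulation laws can be sketched in a few lines. The function names and parameter defaults below are illustrative, not taken from any broadcast standard (though the 75 kHz default deviation echoes commercial FM):

```python
import math

def am_sample(audio, fc, t, m=0.5):
    # AM: the carrier's amplitude tracks the audio; the frequency stays fixed.
    return (1.0 + m * audio) * math.cos(2 * math.pi * fc * t)

def fm_sample(phase, audio, fc, dt, dev=75e3):
    # FM: the carrier's frequency tracks the audio; the amplitude stays fixed.
    # Returns (sample, new_phase); the phase advances by the modulated frequency.
    phase += 2 * math.pi * (fc + dev * audio) * dt
    return math.cos(phase), phase
```

Because the audio rides on the frequency rather than the amplitude, an additive burst of static shifts an FM sample's amplitude but not the zero crossings that the receiver tracks, which is why FM is largely immune to lightning crashes that plague AM.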

VHF radio waves act more like light, traveling in straight lines; hence the reception range is generally limited to about 50–200 miles (80–322 km). During unusual upper atmospheric conditions, FM signals are occasionally reflected back towards the Earth by the ionosphere, resulting in long distance FM reception. FM receivers are subject to the capture effect, which causes the radio to only receive the strongest signal when multiple signals appear on the same frequency. FM receivers are relatively immune to lightning and spark interference. High power is useful in penetrating buildings, diffracting around hills, and refracting in the dense atmosphere near the horizon for some distance beyond the horizon. Consequently, 100,000 watt FM stations can regularly be heard up to 100 miles (160 km) away, and farther, 150 miles (240 km), if there are no competing signals.

A few old, "grandfathered" stations do not conform to these power rules. WBCT-FM (93.7) in Grand Rapids, Michigan, US, runs 320,000 watts ERP, and can increase to 500,000 watts ERP by the terms of its original license. Such a huge power level does not usually help to increase range as much as one might expect, because VHF frequencies travel in nearly straight lines over the horizon and off into space. Nevertheless, when there were fewer FM stations competing, this station could be heard near Bloomington, Illinois, US, almost 300 miles (480 km) away.

FM subcarrier services are secondary signals transmitted in a "piggyback" fashion along with the main program. Special receivers are required to utilize these services. Analog channels may contain alternative programming, such as reading services for the blind, background music or stereo sound signals. In some extremely crowded metropolitan areas, the sub-channel program might be an alternate foreign-language radio program for various ethnic groups. Sub-carriers can also transmit digital data, such as station identification, the current song's name, web addresses, or stock quotes. In some countries, FM radios automatically re-tune themselves to the same channel in a different district by using sub-bands.

Two-way
Aviation voice radios use Aircraft band VHF AM. AM is used so that multiple stations on the same channel can be received. (Use of FM would result in stronger stations blocking out reception of weaker stations due to FM's capture effect). Aircraft fly high enough that their transmitters can be received hundreds of miles away, even though they are using VHF.
Marine voice radios can use single sideband voice (SSB) in the shortwave High Frequency (HF—3 MHz to 30 MHz) radio spectrum for very long ranges, or Marine VHF radio / narrowband FM in the VHF spectrum for much shorter ranges. Narrowband FM sacrifices fidelity to make more channels available within the radio spectrum, by using a smaller range of radio frequencies, usually with 5 kHz of deviation, versus the 75 kHz used by commercial FM broadcasts, and 25 kHz used for TV sound. Government, police, fire and commercial voice services also use narrowband FM on special frequencies. Early police radios used AM receivers to receive one-way dispatches.
Civil and military HF (high frequency) voice services use shortwave radio to contact ships at sea, aircraft and isolated settlements. Most use single sideband voice (SSB), which uses less bandwidth than AM.[23] On an AM radio SSB sounds like ducks quacking, or the adults in a Charlie Brown cartoon. Viewed as a graph of frequency versus power, an AM signal shows power where the frequencies of the voice add and subtract with the main radio frequency. SSB cuts the bandwidth in half by suppressing the carrier and one of the sidebands. This also makes the transmitter about three times more powerful, because it doesn't need to transmit the unused carrier and sideband.
TETRA (Terrestrial Trunked Radio) is a digital trunked radio system for military, police and ambulance services. Commercial services such as XM, WorldSpace and Sirius offer encrypted digital satellite radio.

Telephony

Mobile phones transmit to a local cell site (transmitter/receiver) that ultimately connects to the public switched telephone network (PSTN) through optical fiber or microwave radio and other network elements. When the mobile phone nears the edge of the cell site's radio coverage area, the central computer switches the phone to a new cell. Cell phones originally used FM, but now most use various digital modulation schemes. Recent developments in Sweden (such as DROPme) allow for the instant downloading of digital material from a radio broadcast (such as a song) to a mobile phone. Satellite phones use satellites rather than cell towers to communicate.

Video

Analog television sends the picture as AM and the sound as AM or FM, with the sound carrier a fixed frequency (4.5 MHz in the NTSC system) away from the video carrier. Analog television also uses a vestigial sideband on the video carrier to reduce the bandwidth required.

Digital television uses 8VSB modulation in North America (under the ATSC digital television standard), and COFDM modulation elsewhere in the world (using the DVB-T standard). A Reed–Solomon error correction code adds redundant correction codes and allows reliable reception during moderate data loss. Although many current and future codecs can be sent in the MPEG transport stream container format, as of 2006 most systems use a standard-definition format almost identical to DVD: MPEG-2 video in anamorphic widescreen and MPEG layer 2 (MP2) audio. High-definition television is possible simply by using a higher-resolution picture, but H.264/AVC is being considered as a replacement video codec in some regions for its improved compression. With the compression and improved modulation involved, a single "channel" can contain a high-definition program and several standard-definition programs.

Navigation

All satellite navigation systems use satellites with precision clocks. The satellite transmits its position and the time of the transmission. The receiver listens to four satellites, and can figure its position as lying on a spherical shell around each satellite, with radius determined by the time-of-flight of the radio signals from that satellite; the intersection of these shells fixes the receiver's location. A computer in the receiver does the math. Radio direction-finding is the oldest form of radio navigation. Before 1960 navigators used movable loop antennas to locate commercial AM stations near cities. In some cases they used marine radiolocation beacons, which share a range of frequencies just above AM radio with amateur radio operators. LORAN systems also used time-of-flight radio signals, but from radio stations on the ground.
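The time-of-flight fix can be sketched in two dimensions with three ground stations; real GPS solves the same geometry in three dimensions, with a fourth satellite absorbing the receiver's clock error. The station positions and receiver location below are made up for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def trilaterate_2d(stations, times):
    # Each (x, y) station and signal travel time define a circle of radius c*t;
    # the receiver sits at the intersection. Subtracting the circle equations
    # pairwise cancels the quadratic terms, leaving a 2x2 linear system.
    (x1, y1), (x2, y2), (x3, y3) = stations
    r1, r2, r3 = (C * t for t in times)
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical receiver at (3000, 4000) m; travel times follow from geometry.
stations = [(0.0, 0.0), (10_000.0, 0.0), (0.0, 10_000.0)]
truth = (3000.0, 4000.0)
times = [math.dist(s, truth) / C for s in stations]
x, y = trilaterate_2d(stations, times)
```

The extra (fourth) measurement in real GPS exists precisely because the receiver's cheap clock makes every measured travel time wrong by the same unknown offset.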

Very High Frequency Omnidirectional Range (VOR) systems, used by aircraft, have an antenna array that transmits two signals simultaneously. A directional signal rotates like a lighthouse at a fixed rate. When the directional signal is facing north, an omnidirectional signal pulses. By measuring the difference in phase of these two signals, an aircraft can determine its bearing, or radial, from the station, thus establishing a line of position. An aircraft can get readings from two VORs and locate its position at the intersection of the two radials, known as a "fix."
When the VOR station is collocated with DME (Distance Measuring Equipment), the aircraft can determine its bearing and range from the station, thus providing a fix from only one ground station. Such stations are called VOR/DMEs. The military operates a similar system of navaids, called TACANs, which are often built into VOR stations. Such stations are called VORTACs. Because TACANs include distance measuring equipment, VOR/DME and VORTAC stations are identical in navigation potential to civil aircraft.
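Turning two radials into a fix is a line-intersection problem. The flat-earth geometry below is a deliberate simplification (coordinates in arbitrary units, north = +y, bearings in degrees clockwise from north); the function and example positions are illustrative:

```python
import math

def radial_to_fix(vor1, radial1_deg, vor2, radial2_deg):
    # Each VOR position plus a radial defines a ray; the aircraft sits where
    # the two rays cross. Solve vor1 + t1*d1 == vor2 + t2*d2 for t1.
    def direction(deg):
        rad = math.radians(deg)
        return (math.sin(rad), math.cos(rad))  # bearing 0 points north (+y)
    (x1, y1), (x2, y2) = vor1, vor2
    dx1, dy1 = direction(radial1_deg)
    dx2, dy2 = direction(radial2_deg)
    det = dx1 * (-dy2) - (-dx2) * dy1          # Cramer's rule on the 2x2 system
    t1 = ((x2 - x1) * (-dy2) - (-dx2) * (y2 - y1)) / det
    return (x1 + t1 * dx1, y1 + t1 * dy1)

# Aircraft on the 045 radial of one VOR and the 135 radial of another.
fix = radial_to_fix((0.0, 0.0), 45.0, (0.0, 10.0), 135.0)
```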

Radar

Radar (Radio Detection and Ranging) detects objects at a distance by bouncing radio waves off them. The delay caused by the echo measures the distance. The direction of the beam determines the direction of the reflection. The polarization and frequency of the return can sense the type of surface. Navigational radars scan a wide area two to four times per minute. They use very short waves that reflect from earth and stone. They are common on commercial ships and long-distance commercial aircraft.
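The ranging arithmetic is simply the round-trip delay at the speed of light; the function name is illustrative:

```python
C = 299_792_458.0  # speed of light in m/s

def radar_range_m(echo_delay_s):
    # The pulse travels out and back, so the one-way range is c * t / 2.
    return C * echo_delay_s / 2

# A 1 ms echo delay puts the target about 150 km away.
r = radar_range_m(1e-3)
```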

General purpose radars generally use navigational radar frequencies, but modulate and polarize the pulse so the receiver can determine the type of surface of the reflector. The best general-purpose radars distinguish the rain of heavy storms, as well as land and vehicles. Some can superimpose sonar data and map data from GPS position.

Search radars scan a wide area with pulses of short radio waves. They usually scan the area two to four times a minute. Sometimes search radars use the Doppler effect to separate moving vehicles from clutter. Targeting radars use the same principle as search radar but scan a much smaller area far more often, usually several times a second or more. Weather radars resemble search radars, but use radio waves with circular polarization and a wavelength chosen to reflect from water droplets. Some weather radars use the Doppler effect to measure wind speeds.


Data (digital radio)

Most new radio systems are digital, including Digital TV, satellite radio, and Digital Audio Broadcasting. The oldest form of digital broadcast was spark gap telegraphy, used by pioneers such as Marconi. By pressing the key, the operator could send messages in Morse code by energizing a rotating commutating spark gap. The rotating commutator produced a tone in the receiver, where a simple spark gap would produce a hiss, indistinguishable from static. Spark-gap transmitters are now illegal, because their transmissions span several hundred megahertz. This is very wasteful of both radio frequencies and power.

The next advance was continuous wave telegraphy, or CW (Continuous Wave), in which a pure radio frequency, produced by a vacuum tube electronic oscillator, was switched on and off by a key. A receiver with a local oscillator would "heterodyne" with the pure radio frequency, creating a whistle-like audio tone. CW uses less than 100 Hz of bandwidth. CW is still used, these days primarily by amateur radio operators (hams). Strictly, on-off keying of a carrier should be known as "Interrupted Continuous Wave" (ICW) or on-off keying (OOK).
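On-off keying of Morse code can be sketched as a schedule of carrier-on and carrier-off intervals. The timing ratios (1-unit dot, 3-unit dash, 1-unit gap between elements, 3-unit gap between letters) follow Morse convention; the function itself and the two-letter table are illustrative:

```python
MORSE = {"S": "...", "O": "---"}  # tiny excerpt of the Morse table

def ook_keying(text, unit=1):
    # Returns a list of (carrier_on, duration_in_units) pairs: the keying
    # schedule that switches the pure carrier on and off.
    frames = []
    for i, letter in enumerate(text):
        if i:
            frames.append((False, 3 * unit))          # gap between letters
        for j, element in enumerate(MORSE[letter]):
            if j:
                frames.append((False, unit))          # gap between elements
            frames.append((True, unit if element == "." else 3 * unit))
    return frames

frames = ook_keying("SO")
```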

Radio teletype equipment usually operates on short-wave (HF) and is much loved by the military because it creates written information without a skilled operator. It sends a bit as one of two tones using frequency-shift keying. Groups of five or seven bits become a character printed by a teleprinter. From about 1925 to 1975, radio teletype was how most commercial messages were sent to less developed countries. It is still used by the military and weather services.
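The bit-to-tone mapping is plain frequency-shift keying. The 2125/2295 Hz pair (a 170 Hz shift) is a common amateur RTTY audio convention, used here only as an example default:

```python
def fsk_tone(bit, mark_hz=2125.0, space_hz=2295.0):
    # Each bit selects one of two audio tones: "mark" for 1, "space" for 0.
    return mark_hz if bit else space_hz

# A 5-bit character becomes a sequence of tones, one per bit.
tones = [fsk_tone(b) for b in (1, 0, 0, 1, 1)]
```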

Aircraft use a 1200 baud radio teletype service over VHF to send their ID, altitude and position, and get gate and connecting-flight data. Microwave dishes on satellites, telephone exchanges and TV stations usually use quadrature amplitude modulation (QAM). QAM sends data by changing both the phase and the amplitude of the radio signal. Engineers like QAM because it packs the most bits into a radio signal when given an exclusive (non-shared) fixed narrowband frequency range. Usually the bits are sent in "frames" that repeat. A special bit pattern is used to locate the beginning of a frame.
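A toy constellation shows how QAM encodes bits in both amplitude and phase at once. The four points below are invented for illustration; real links use dense square grids such as 16-QAM or 64-QAM:

```python
import math

# Hypothetical 4-point constellation: bit pair -> (amplitude, phase in degrees).
CONSTELLATION = {
    (0, 0): (1.0, 45.0), (0, 1): (1.0, 225.0),
    (1, 0): (2.0, 45.0), (1, 1): (2.0, 225.0),
}

def qam_symbol(bit_pair):
    # The transmitter sends amplitude * cos(2*pi*f*t + phase); the in-phase (I)
    # and quadrature (Q) components below are what the receiver recovers.
    amp, phase_deg = CONSTELLATION[bit_pair]
    ph = math.radians(phase_deg)
    return (amp * math.cos(ph), amp * math.sin(ph))
```

Packing more points into the constellation carries more bits per symbol, at the cost of needing a cleaner, interference-free channel to tell neighboring points apart.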
Communication systems that limit themselves to a fixed narrowband frequency range are vulnerable to jamming. A variety of jamming-resistant spread spectrum techniques were initially developed for military use, most famously for Global Positioning System satellite transmissions. Commercial use of spread spectrum began in the 1980s. Bluetooth, most cell phones, and the 802.11b version of Wi-Fi each use various forms of spread spectrum.
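One spread-spectrum form, frequency hopping, is easy to sketch: both ends derive the same pseudo-random channel sequence from a shared secret, so a jammer parked on any single channel catches only a fraction of the transmission. The 79-channel count echoes Bluetooth's layout; the seed-sharing scheme below is purely illustrative:

```python
import random

def hop_sequence(shared_seed, n_channels, n_hops):
    # Transmitter and receiver seed identical generators and so hop in lockstep.
    rng = random.Random(shared_seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

tx_hops = hop_sequence(42, 79, 10)
rx_hops = hop_sequence(42, 79, 10)  # identical: the receiver follows every hop
```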

Systems that need reliability, or that share their frequency with other services, may use "coded orthogonal frequency-division multiplexing" (COFDM). COFDM breaks a digital signal into as many as several hundred slower sub-channels. The digital signal is often sent as QAM on the sub-channels. Modern COFDM systems use a small computer to make and decode the signal with digital signal processing, which is more flexible and far less expensive than older systems that implemented separate electronic channels. COFDM resists fading and ghosting because the narrow-channel QAM signals can be sent slowly. An adaptive system, or one that sends error-correction codes, can also resist interference, because most interference can affect only a few of the QAM channels. COFDM is used for Wi-Fi, some cell phones, Digital Radio Mondiale, Eureka 147, and many other local area network, digital TV and radio standards.
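The serial-to-parallel split at the heart of COFDM can be sketched as round-robin distribution. A real system then modulates each lane onto its own orthogonal sub-carrier with an inverse FFT, which this sketch omits:

```python
def ofdm_split(bits, n_subchannels=4):
    # A fast serial stream becomes n slow parallel lanes; each lane's symbol
    # lasts n times longer, so short echoes no longer smear adjacent symbols.
    lanes = [[] for _ in range(n_subchannels)]
    for i, bit in enumerate(bits):
        lanes[i % n_subchannels].append(bit)
    return lanes

lanes = ofdm_split([1, 0, 1, 1, 0, 0, 1, 0])
```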

Heating

Radio-frequency energy generated for heating of objects is generally not intended to radiate outside of the generating equipment, to prevent interference with other radio signals. Microwave ovens use intense radio waves to heat food. Diathermy equipment is used in surgery for sealing of blood vessels. Induction furnaces are used for melting metal for casting, and induction hobs for cooking.
Amateur radio service
Amateur radio, also known as "ham radio", is a hobby in which enthusiasts are licensed to communicate on a number of bands in the radio frequency spectrum non-commercially and for their own experiments. They may also provide emergency and service assistance in exceptional circumstances; this contribution has saved lives in many instances.

Radio amateurs use a variety of modes, including efficient ones like Morse code and experimental ones like Low-Frequency Experimental Radio. Several forms of radio were pioneered by radio amateurs and later became commercially important, including FM, single-sideband (SSB), AM, digital packet radio and satellite repeaters. Some amateur frequencies may be disrupted illegally by power-line internet service.

Unlicensed radio services

Unlicensed, government-authorized personal radio services such as Citizens' band radio in Australia, most of the Americas, and Europe, and Family Radio Service and Multi-Use Radio Service in North America exist to provide simple, usually short range communication for individuals and small groups, without the overhead of licensing. Similar services exist in other parts of the world. These radio services involve the use of handheld units. Wi-Fi also operates in unlicensed radio bands and is very widely used to network computers.
Free radio stations, sometimes called pirate radio or "clandestine" stations, are unauthorized, unlicensed, illegal broadcasting stations. These are often low power transmitters operated on sporadic schedules by hobbyists, community activists, or political and cultural dissidents. Some pirate stations operating offshore in parts of Europe and the United Kingdom more closely resembled legal stations, maintaining regular schedules, using high power, and selling commercial advertising time.

Radio control (RC)

Radio remote controls use radio waves to transmit control data to a remote object as in some early forms of guided missile, some early TV remotes and a range of model boats, cars and airplanes. Large industrial remote-controlled equipment such as cranes and switching locomotives now usually use digital radio techniques to ensure safety and reliability.
In Madison Square Garden, at the Electrical Exhibition of 1898, Nikola Tesla successfully demonstrated a radio-controlled boat. He was awarded U.S. patent No. 613,809 for a "Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles".

Internet World

                        Internet   

                  The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link billions of devices worldwide. It began in California in 1969 and began connecting to networks on other continents in 1988. It is a network of networks ("internet" is short for "inter-networking") that consists of millions of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as mobile apps including social media apps, the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, multiplayer online games, telephony, and peer-to-peer networks for file sharing. Vint Cerf is widely called the father of the Internet.
The origins of the Internet date back to research commissioned by the United States government in the 1960s to build robust, fault-tolerant communication via computer networks. The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1980s. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The linking of international and commercial networks from 1988 onwards marks the beginning of the transition to the modern Internet, and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network.

Although the Internet has been widely used by academia, college students and many government-approved businesses since the 1980s, its introduction to the public in the late 1980s and early 1990s incorporated its services and technologies into virtually every aspect of modern human life. As of 2014, 38 percent of the world's human population had used the services of the Internet within the past year, over 100 times more people than were using it in 1995. Internet use grew rapidly in the West from the mid-1990s to the early 2000s, and from the late 1990s to the present in the developing world.
Most traditional communications media, including telephony and television, are being reshaped or redefined by the Internet, giving birth to new services such as Internet telephony and Internet television. Newspaper, book, and other print publishing are adapting to website technology, or are reshaped into blogging and web feeds. The entertainment industry, including music, film, and gaming, was initially the fastest growing online segment. The Internet has enabled and accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has grown exponentially both for major retailers and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

Terminology
The Internet, referring to the specific global system of interconnected Internet Protocol (IP) networks, is a proper noun and may be written with an initial capital letter. In the media and common use it is often not capitalized, viz. the internet. Some guides specify that the word should be capitalized when used as a noun, but not capitalized when used as an adjective. The Internet is also often referred to as the Net.

Historically the word internetted was used, uncapitalized, as early as 1849 as an adjective meaning "Interconnected; interwoven". The designers of early computer networks used internet both as a noun and as a verb in shorthand form of internetwork or internetworking, meaning interconnecting computer networks.
The terms Internet and World Wide Web are often used interchangeably in everyday speech; it is common to speak of "going on the Internet" when invoking a web browser to view web pages. However, the World Wide Web or the Web is only one of a large number of Internet services. The Web is a collection of interconnected documents (web pages) and other web resources, linked by hyperlinks and URLs. As another point of comparison, Hypertext Transfer Protocol, or HTTP, is the language used on the Web for information transfer, yet it is just one of many languages or protocols that can be used for communication on the Internet. The term Interweb is a portmanteau of Internet and World Wide Web typically used sarcastically to parody a technically unsavvy user.
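HTTP's place as just one protocol among many is easy to see: an HTTP request is plain structured text that any transport connection can carry. The builder below is a minimal sketch of an HTTP/1.1 GET; the host name is a placeholder example:

```python
def http_get_request(host, path="/"):
    # An HTTP/1.1 request is lines of ASCII text, terminated by a blank line.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")

req = http_get_request("example.com")
```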


History  

Research into packet switching started in the early 1960s, and packet switched networks such as the Mark I at NPL in the UK, ARPANET, CYCLADES, Merit Network, Tymnet, and Telenet were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, in which multiple separate networks could be joined together into a network of networks.
The first two nodes of what would become the ARPANET were interconnected between Leonard Kleinrock's Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science and Douglas Engelbart's NLS system at SRI International (SRI) in Menlo Park, California, on 29 October 1969. The third site on the ARPANET was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, there were already fifteen sites connected to the young ARPANET by the end of 1971. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

Early international collaborations on the ARPANET were rare. European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in June 1973, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter T. Kirstein's research group in the United Kingdom, initially at the Institute of Computer Science, University of London and later at University College London. In December 1974, RFC 675 – Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal, and Carl Sunshine, used the term internet as a shorthand for internetworking, and later RFCs repeat this use.

Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which permitted worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNET) provided access to supercomputer sites in the United States from research and education organizations, first at 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. Commercial Internet service providers (ISPs) began to emerge in the late 1980s and early 1990s. The ARPANET was decommissioned in 1990. The Internet was fully commercialized in the U.S. by 1995 when NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic. The Internet rapidly expanded in Europe and Australia in the mid to late 1980s and to Asia in the late 1980s and early 1990s. Dedicated transatlantic communication between the NSFNET and networks in Europe began with a low-speed satellite relay between Princeton University and Stockholm, Sweden, in December 1988.
Although other network protocols such as UUCP had global reach well before this time, this marked the beginning of the "Internet proper" as an intercontinental network. Slightly over a year later, in March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Six months later Tim Berners-Lee would begin writing WorldWideWeb, the first web browser, after two years of lobbying CERN management.

Since 1995 the Internet has tremendously impacted culture and commerce, including the rise of near instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web[37] with its discussion forums, blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more.

Worldwide Internet users

                               2005         2010         2014
World population               6.5 billion  6.9 billion  7.2 billion
Not using the Internet         84%          70%          60%
Using the Internet             16%          30%          40%
Users in the developing world  8%           21%          32%
Users in the developed world   51%          67%          78%


 
The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.  This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.  As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30.2% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.


Governance

The Internet is a globally distributed network comprising many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN), headquartered in the neighborhood of Playa Vista, in Los Angeles, California. ICANN is the authority that coordinates the assignment of unique identifiers for use on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces, in which names and numbers are uniquely assigned, are essential for maintaining the global reach of the Internet. ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body for the global Internet.
Regional Internet Registries (RIRs) allocate IP addresses:

1. African Network Information Center (AfriNIC) for Africa
2. American Registry for Internet Numbers (ARIN) for North America
3. Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region
4. Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region
5. Réseaux IP Européens Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia

In addition, the National Telecommunications and Information Administration, an agency of the United States Department of Commerce, continues to have final approval over changes to the DNS root zone.

The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world".[49] Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.
Access
Common methods of Internet access by users include dial-up with a computer modem via telephone circuits, broadband over coaxial cable, fiber optic or copper wires, Wi-Fi, satellite and cellular telephone technology (3G, 4G). The Internet may often be accessed from computers in libraries and Internet cafes. Internet access points exist in many public places such as airport halls and coffee shops. Various terms are used, such as public Internet kiosk, public access terminal, and Web payphone. Many hotels also have public terminals, though these are usually fee-based. These terminals are widely accessed for various uses, such as ticket booking, bank deposit, or online payment. Wi-Fi provides wireless access to the Internet via local computer networks. Hotspots providing such access include Wi-Fi cafes, where users need to bring their own wireless devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based.
Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench. Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services. High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though this is not as widely used. An Internet access provider and protocol matrix differentiates the methods used to get online.

Protocols

While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the Internet Engineering Task Force (IETF). The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. Resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site.
The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.
The Internet standards describe a framework known as the Internet protocol suite. This is a model architecture that divides methods into a layered system of protocols, originally documented in RFC 1122 and RFC 1123. The layers correspond to the environment or scope in which their services operate. At the top is the application layer, the space for the application-specific networking methods used in software applications. For example, a web browser program uses the client-server application model and a specific protocol of interaction between servers and clients, while many file-sharing systems use a peer-to-peer paradigm. Below this top layer, the transport layer connects applications on different hosts with a logical channel through the network with appropriate data exchange methods.
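The client-server model and the transport layer's "logical channel" can be illustrated with a short sketch. The example below is a toy Python demonstration, not part of any Internet standard: a TCP server and client on the same machine exchange a message over the reliable byte stream that the transport layer provides (the loopback address and all names are illustrative).

```python
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    """Accept one connection and echo back whatever arrives."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)  # send the same bytes back to the client

# The server binds to the loopback interface; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

# The client side of the application layer: TCP gives it a reliable channel.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

The application (here, a trivial echo) never deals with packets, routing, or retransmission; the transport layer hides all of that behind the socket interface.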
Underlying these layers are the networking technologies that interconnect networks at their borders and hosts via the physical connections. The Internet layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and routes their traffic via intermediate (transit) networks. Last, at the bottom of the architecture is the link layer, which provides connectivity between hosts on the same network link, such as a physical connection in the form of a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the OSI model, that attempt to be comprehensive in every aspect of communications. While many similarities exist between the models, they are not compatible in the details of description or implementation; indeed, TCP/IP protocols are usually included in the discussion of OSI networking.
As user data is processed through the protocol stack, each abstraction layer adds encapsulation information at the sending host. Data is transmitted over the wire at the link level between hosts and routers. Encapsulation is removed by the receiving host. Intermediate relays update link encapsulation at each hop, and inspect the IP layer for routing purposes.
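The layered encapsulation described above can be sketched in a few lines. This is a deliberately simplified toy, not the real TCP, IP, or Ethernet header formats: each layer merely prepends its own header bytes to what the layer above handed it.

```python
import struct

def encapsulate(payload: bytes, src_port: int, dst_port: int) -> bytes:
    """Toy illustration of per-layer encapsulation (not real wire formats)."""
    # Transport layer: a simplified header carrying ports and payload length.
    segment = struct.pack("!HHH", src_port, dst_port, len(payload)) + payload
    # Internet layer: prepend toy 4-byte source and destination addresses.
    packet = bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2]) + segment
    # Link layer: frame the whole packet with delimiter bytes.
    return b"\x7e" + packet + b"\x7e"

frame = encapsulate(b"hi", 40000, 80)
```

The receiving host strips these headers in the reverse order, and an intermediate router would look only at the internet-layer addresses while replacing the link-layer framing at each hop.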

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and in essence establishes the Internet itself. Internet Protocol Version 4 (IPv4) is the initial version used on the first generation of the Internet and is still in dominant use. It was designed to address up to 4.3 billion (10⁹) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global address allocation pool was exhausted. A new protocol version, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion.
IPv6 is, by design, not directly interoperable with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for networking devices that need to communicate on both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, is still lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts, e.g., peering agreements, and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
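The size difference between the two address spaces is easy to see with Python's standard `ipaddress` module (the specific addresses below are documentation examples, chosen for illustration):

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # a 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # a 128-bit IPv6 address

v4_space = 2 ** 32   # about 4.3 billion addresses in total
v6_space = 2 ** 128  # vastly larger: ~3.4 x 10^38 addresses
```

The 32-bit limit of IPv4 is why the global allocation pool could be exhausted at all; IPv6's 128-bit space removes that constraint.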

Services

The Internet carries many network services, most prominently mobile apps such as social media apps, the World Wide Web, electronic mail, multiplayer online games, Internet telephony, and file sharing services.

World Wide Web

This NeXT Computer was used by Tim Berners-Lee at CERN and became the world's first Web server. Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but the two terms are not synonymous. The World Wide Web is the primary application that billions of people use on the Internet, and it has changed their lives immeasurably. However, the Internet provides many other services. The Web is a global set of documents, images and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs). URIs symbolically identify services, servers, and other databases, and the documents and resources that they can provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.
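How a URI breaks down into the parts a browser uses can be shown with Python's standard `urllib.parse`. The URI below is a made-up example; the hand-built request line is a minimal sketch of what an HTTP/1.1 request looks like on the wire, not a complete client.

```python
from urllib.parse import urlparse

uri = urlparse("https://www.example.org/docs/page.html?lang=en#intro")
# scheme selects the access protocol (HTTP over TLS here), netloc names
# the server, and path identifies the resource on that server.
request_line = (
    f"GET {uri.path}?{uri.query} HTTP/1.1\r\n"
    f"Host: {uri.netloc}\r\n\r\n"
)
```

The fragment (`#intro`) is notable for never being sent to the server: it is resolved entirely by the browser after the document arrives.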

World Wide Web browser software, such as Microsoft's Internet Explorer, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to another via hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo! and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale.
The Web has also enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. However, publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result.
One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work. Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and Twitter currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.
When the Web developed in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, complete for transmission to a web browser in response to a request. Over time, the process of creating and serving web pages has become dynamic, creating flexible design, layout, and content. Websites are often created using content management software with, initially, very little content. Contributors to these systems, who may be paid staff, members of an organization or the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.

Communication

Email is an important communications service available on the Internet. The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Pictures, documents and other files are sent as email attachments. Emails can be cc-ed to multiple email addresses. Internet telephony is another common communications service made possible by the creation of the Internet. VoIP stands for Voice-over-Internet Protocol, a reference to the Internet Protocol (IP) that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL. VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer.
Voice quality can still vary from call to call, but is often equal to and can even exceed that of traditional calls. Remaining problems for VoIP include emergency telephone number dialing and reliability. Currently, a few VoIP providers offer an emergency service, but it is not universally available. Older traditional phones with no "extra features" may be line-powered only and operate during a power failure; VoIP cannot do so without a backup power source for the phone equipment and the Internet access devices. VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak. Modern video game consoles also offer VoIP chat features.

Data transfer

File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, on a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
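The message-digest check mentioned above works by recomputing a hash of the received bytes and comparing it with the value the sender published. A minimal sketch with Python's standard `hashlib` (using SHA-256 rather than the legacy MD5 named in the text, since MD5 is no longer considered collision-resistant; the file contents are a placeholder):

```python
import hashlib

contents = b"example file contents"
# The sender publishes this digest alongside the download link.
published = hashlib.sha256(contents).hexdigest()

# The receiver recomputes the digest over what actually arrived
# and compares it with the published value.
received = b"example file contents"
ok = hashlib.sha256(received).hexdigest() == published
```

A digest alone only detects accidental corruption; guarding against a malicious substitution additionally requires a digital signature over the digest, as the text notes.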
Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where usually audio material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide. Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p.
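Those bit rates translate directly into data volume. As a worked check (the helper function is illustrative, not from any standard): an hour of 1080p at 4.5 Mbit/s works out to roughly 2 GB.

```python
def stream_megabytes(bitrate_mbit_s: float, seconds: float) -> float:
    """Data volume in megabytes for a stream at the given bit rate."""
    return bitrate_mbit_s * seconds / 8  # 8 bits per byte

hd_hour = stream_megabytes(4.5, 3600)  # 1080p HDX for one hour: 2025 MB
sd_hour = stream_megabytes(1.0, 3600)  # SD 480p for one hour: 450 MB
```

This ratio, about 4.5x the data for 1080p versus 480p, is why streaming drove bandwidth demand faster than most earlier Internet services.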
Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular, with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with a vast number of users. It uses a Flash-based web player to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions of videos and upload hundreds of thousands daily. Currently, YouTube also uses an HTML5 player.

Security

Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The structure was found to be highly robust to random failures, yet very vulnerable to intentional attacks. The Internet structure and its usage characteristics have been studied extensively and the possibility of developing alternative structures has been investigated.
Internet resources, hardware, and software components are the target of malicious attempts to gain unauthorized control to cause interruptions, or to access private information. Such attempts include computer viruses, which replicate with human assistance; computer worms, which copy themselves automatically; denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of cyber warfare using similar methods on a large scale.