10 Most Notorious Hackers of All Time

There are many notable hackers around the world. In this article, we’ll be talking specifically about famous hackers who wore black hats. Here are ten of the most widely known black-hat hackers and what happened to them for their recklessness.


Jonathan James

Known as “comrade” by many online, 15-year-old Jonathan James was the first juvenile convicted and jailed in the United States for hacking. In 1999, James hacked into companies like BellSouth, as well as the Miami-Dade school system and the Department of Defense. He gained access to information including the source code responsible for operating the International Space Station.

Once NASA detected the breach, the space agency shut down its computers for three weeks, at an estimated cost of $41,000. Arrested on January 26, 2000, James plea-bargained and was sentenced to house arrest and probation. He later served six months in an Alabama prison after failing a drug test and thus violating his probation. Boston Market, Barnes & Noble, Office Max and other companies were victims of a massive 2007 hack, and James was investigated by law enforcement for the crimes despite denying any involvement.

James was found dead from a self-inflicted gunshot wound on May 18, 2008. In his suicide note he wrote he was troubled by the justice system and believed he would be prosecuted for newer crimes with which he had nothing to do.

Gary McKinnon

Gary McKinnon was known by his Internet handle, “Solo.” Under that name, he coordinated what would become the largest military computer hack of all time. The allegation is that, over a 13-month period from February 2001 to March 2002, he illegally gained access to 97 computers belonging to the U.S. Armed Forces and NASA.

McKinnon claimed that he was only searching for information related to free energy suppression and UFO activity cover-ups. But according to U.S. authorities, he deleted a number of critical files, rendering over 300 computers inoperable and resulting in over $700,000 in damages.

A Scot operating out of the United Kingdom, McKinnon was able to dodge the American government for a time, and he fought extradition to the United States for years; in 2012, the UK government finally blocked the extradition.

Kevin Mitnick

Kevin Mitnick’s journey as a computer hacker has been so interesting and compelling that the U.S. Department of Justice called him the “most wanted computer criminal in U.S. history.” His story is so wild that it was the basis for two feature films.

What did he do? After serving a year in prison for hacking into the Digital Equipment Corporation’s network, he was let out for 3 years of supervised release. Near the end of that period, however, he fled and went on a 2.5-year hacking spree that involved breaching the national defense warning system and stealing corporate secrets.

Mitnick was eventually caught and convicted, ending with a 5-year prison sentence. After serving those years fully, he became a consultant and public speaker for computer security. He now runs Mitnick Security Consulting, LLC.

Kevin Poulsen

Kevin Poulsen, also known as “Dark Dante,” gained his fifteen minutes of fame by utilizing his intricate knowledge of telephone systems. At one point, he hacked a radio station’s phone lines and fixed himself as the winning caller, earning him a brand-new Porsche. The media dubbed him the “Hannibal Lecter of computer crime.”

He then earned his way onto the FBI’s wanted list when he hacked into federal systems and stole wiretap information. Funnily enough, he was later captured in a supermarket and sentenced to 51 months in prison, as well as being ordered to pay $56,000 in restitution.

Like Kevin Mitnick, Poulsen changed his ways after being released from prison. He began working as a journalist and is now a senior editor for Wired News. At one point, he even helped law enforcement to identify 744 sex offenders on MySpace.

Albert Gonzalez

Albert Gonzalez paved his way to Internet fame when he collected over 170 million credit card and ATM card numbers over a period of two years. Yep. That’s equal to a little over half the population of the United States.

Gonzalez started off as the leader of a hacker group known as ShadowCrew. This group would go on to steal 1.5 million credit card numbers and sell them online for profit. ShadowCrew also fabricated fraudulent passports, health insurance cards, and birth certificates for identity theft crimes totaling $4.3 million stolen.

The big bucks wouldn’t come until later, when Gonzalez hacked into the databases of TJX Companies and Heartland Payment Systems for their stored credit card numbers. In 2010, Gonzalez was sentenced to 20 years in prison (two 20-year sentences, to be served concurrently).

Stephen Wozniak

Famous for being the co-founder of Apple, Stephen “Woz” Wozniak began his ‘white-hat’ hacking career with ‘phone phreaking’ – slang for bypassing the phone system. While studying at the University of California, he made devices for his friends called ‘blue boxes’ that allowed them to make free long-distance phone calls. Wozniak allegedly used one such device to call the Pope. He later dropped out of university after he began work on an idea for a computer. He formed Apple Computer with his friend Steve Jobs and the rest, as they say, is history.

Adrian Lamo

The “homeless hacker”, Adrian Lamo, is also one of the world’s most hated hackers after turning in Chelsea Manning for leaking classified US Army documents.

Before that, he hacked the computer network of The New York Times in 2002, gaining access to private databases including the personal information of all 3,000 contributors to the paper’s op-ed page. Sentenced to two years’ probation and fined nearly $65,000, Lamo went on to bigger fame later in life.

Lamo turned in Chelsea Manning for being a source to WikiLeaks. He said Manning’s long sentence would be a “lasting regret.”

David L. Smith

David Smith authored the Melissa virus – the first successful email-aware virus – which he distributed via the Usenet discussion group alt.sex. Arrested and sentenced for causing more than $80 million in damage, Smith remains one of the world’s original notorious hackers; he served 20 months in jail.

There are other notable hackers, such as Max Ray “Iceman” Butler (ran up over $86 million in fraudulent charges), Kevin Poulsen (military and phone company hacks), Jeremy Hammond (Anonymous) and Albert Gonzalez (hack of TJ Maxx and other retailers). Of course, there are entire hacker groups, such as Anonymous, as well.

John McAfee

When John McAfee lived in Belize, he planned to study plants – probably some psychoactive plants – and had a lab for the purpose. Authorities seized his property for allegedly creating drugs in this lab, McAfee claims, after an official came seeking political bribes from the gringo. To get back at the Belize government and prove its corruption, he hacked into computers across Belize’s government bureaucracies. He found evidence implicating officials in corruption, money laundering, drug running and murder. He then had to organize his own escape from Belize to avoid arrest, which he did by faking a heart attack.

Today McAfee lies low, believing he is routinely tracked by law enforcement. He recently posted on social media that he had gotten into a shootout with police after being arrested.

Sven Jaschan

Jaschan was found guilty of writing the Netsky and Sasser worms in 2004, while he was still a teenager. The viruses were found to be responsible for 70 per cent of all the malware seen spreading over the Internet at the time. Jaschan received a suspended sentence and three years’ probation for his crimes. He was also hired by a security company.







  • Last Update 28 June 2016

Hack World


In the computer security context, a hacker is someone who seeks and exploits weaknesses in a computer system or computer network. Hackers may be motivated by a multitude of reasons, such as profit, protest, challenge, enjoyment, or to evaluate those weaknesses to assist in removing them. The subculture that has evolved around hackers is often referred to as the computer underground.

There is a longstanding controversy about the term’s true meaning. In this controversy, the term hacker is reclaimed by computer programmers who argue that it refers simply to someone with an advanced understanding of computers and computer networks, and that cracker is the more appropriate term for those who break into computers, whether computer criminal (black hats) or computer security expert (white hats) – but a recent article concluded that: “…the black-hat meaning still prevails among the general public”.



Several subgroups of the computer underground with different attitudes use different terms to demarcate themselves from each other, or try to exclude some specific group with whom they do not agree.

Eric S. Raymond, author of The New Hacker’s Dictionary, advocates that members of the computer underground should be called crackers. Yet, those people see themselves as hackers and even try to include the views of Raymond in what they see as a wider hacker culture, a view that Raymond has harshly rejected. Instead of a hacker/cracker dichotomy, they emphasize a spectrum of different categories, such as white hat, grey hat, black hat and script kiddie. In contrast to Raymond, they usually reserve the term cracker for more malicious activity.

According to Ralph D. Clifford, to crack is to “gain unauthorized access to a computer in order to commit another crime such as destroying information contained in that system”. These subgroups may also be defined by the legal status of their activities.

White hat

A white hat hacker breaks security for non-malicious reasons, either to test their own security system, perform penetration tests or vulnerability assessments for a client – or while working for a security company which makes security software. The term is generally synonymous with ethical hacker, and the EC-Council, among others, have developed certifications, courseware, classes, and online training covering the diverse arena of ethical hacking.

Black hat

A “black hat” hacker is a hacker who “violates computer security for little reason beyond maliciousness or for personal gain” (Moore, 2005). The term was coined by Richard Stallman, to contrast the maliciousness of a criminal hacker versus the spirit of playfulness and exploration of hacker culture, or the ethos of the white hat hacker who performs hacking duties to identify places to repair. Black hat hackers form the stereotypical, illegal hacking groups often portrayed in popular culture, and are “the epitome of all that the public fears in a computer criminal”.


Grey hat

A grey hat hacker lies between a black hat and a white hat hacker. A grey hat hacker may surf the Internet and hack into a computer system for the sole purpose of notifying the administrator that their system has a security defect, for example. They may then offer to correct the defect for a fee. Grey hat hackers sometimes find the defect of a system and publish the facts to the world instead of a group of people. Even though grey hat hackers may not necessarily perform hacking for their personal gain, unauthorized access to a system can be considered illegal and unethical.

Elite hacker

A social status among hackers, elite is used to describe the most skilled. Newly discovered exploits circulate among these hackers. Elite groups such as Masters of Deception conferred a kind of credibility on their members.

Script kiddie

A script kiddie (also known as a skid or skiddie) is an unskilled hacker who breaks into computer systems by using automated tools written by others (usually by other black hat hackers), hence the term script (i.e. a prearranged plan or set of activities) kiddie (i.e. kid, child—an individual lacking knowledge and experience, immature), usually with little understanding of the underlying concept.


A neophyte (“newbie”, or “noob”) is someone who is new to hacking or phreaking and has almost no knowledge or experience of the workings of technology and hacking.

Blue hat

A blue hat hacker is someone outside computer security consulting firms who is used to bug-test a system prior to its launch, looking for exploits so they can be closed. Microsoft also uses the term BlueHat to represent a series of security briefing events.


A hacktivist is a hacker who utilizes technology to publicize a social, ideological, religious or political message.

Hacktivism can be divided into two main groups:

Cyberterrorism — Activities involving website defacement or denial-of-service attacks; and,

Freedom of information — Making information that is not public, or is public in non-machine-readable formats, accessible to the public.

Nation state

Intelligence agencies and cyberwarfare operatives of nation states.

Organized criminal gangs

Groups of hackers that carry out organized criminal activities for profit.



Xbox One vs. PS4

PlayStation 4 review: Great gaming for 2016 and beyond

THE GOOD The PlayStation 4 serves up dazzling graphics, runs on a simplified and logical interface and boasts a fantastic controller. It has the upper hand in indie games and can stream a constantly growing list of legacy titles via PlayStation Now. The PS4 makes it super-easy to capture and broadcast gameplay online and generally delivers a zippier performance than its direct competition. It also doubles as a Blu-ray player and solid media-streaming box.

THE BAD The Xbox One has a slight edge in non-gaming entertainment features such as streaming content and media portal apps.

THE BOTTOM LINE The PlayStation 4’s beautiful graphics, smart interface, blazing performance, near-perfect controller and better indie offerings give it an edge over the Xbox One — though that edge is ever-shrinking.







As the PlayStation 4 quickly approaches its third birthday, let’s reassess the current state of Sony’s flagship game machine.

When the competing consoles were first released, we gave the edge to the PS4 over the Xbox One. And at this point in time, the PS4 is still looking good. It continues to improve thanks to regular system firmware updates and a consistent stream of console-exclusive independent games. Exclusive AAA titles are less frequent, but the PS4 has some promising titles coming down the pike, including The Last Guardian and Horizon Zero Dawn, both scheduled to arrive in 2016. But if you’re concentrating more on the exclusives 2015 had to offer, the Xbox One wins that immediate holiday battle.

The majority of games are available on both platforms and PC. We call these multiplatform games. In our testing, we’ve found that a handful of titles perform better on a PlayStation 4. The most recent example of this is Call of Duty: Black Ops III.

To be clear: The PS4 and the Xbox One are very closely matched. Both offer a growing library of third-party games — mainstays like the Call of Duty and Assassin’s Creed series, as well as newer titles like Fallout 4 and Rainbow Six Siege. And both double as full-service entertainment systems, with built-in Blu-ray players and streaming services like Netflix, YouTube and Hulu Plus.

At this stage in the game we’re still partial to the PlayStation 4. Our reasoning is below — along with a few caveats about areas where the PS4 can improve.

PS4 consoles and bundles

No matter how you purchase a PlayStation 4, it’ll ship with an HDMI cable, a DualShock 4 wireless controller, a USB charging cable and an earbud headset for game chat. The standard console goes for $350, though it seems like at almost any given time a PS4 bundle is being offered by Sony or another retailer. After a recent $50 price cut, the PS4 and Xbox One are nearly identically priced.


PS4 bundles usually provide the best overall value if you’re looking to get started from scratch. Some franchise titles get exclusive PS4 consoles included in their bundles, most recently seen with the Star Wars: Battlefront PS4 SKU.

Major PS4 exclusive games (available now or soon):

  • Bloodborne
  • Uncharted: The Nathan Drake Collection
  • Infamous: Second Son
  • LittleBigPlanet 3
  • Until Dawn

Major PS4 exclusive games due by 2016 and beyond:

  • Uncharted 4: A Thief’s End
  • The Last Guardian
  • Horizon Zero Dawn
  • No Man’s Sky (console exclusive)
  • Dreams
  • Street Fighter V (console exclusive)
  • Ratchet and Clank reboot

PS4 ecosystem

The PlayStation ecosystem includes various products with some shared functionality. For example, the PS Vita can stream PS4 games via “remote play” mode. The PlayStation TV (PSTV) can also stream PS4 games as well as play Vita games and legacy PlayStation titles. Select phones from Sony’s Xperia line can also stream gameplay from the PlayStation 4.

Sony also offers PlayStation Vue, a cable TV alternative starting at $50 a month available on the PS3 and PS4. PlayStation Now, the company’s legacy game-streaming service, is available on every PlayStation platform and lets subscribers play games from the Sony vault. If you purchase in three-month increments, it works out to around $15 a month.

Firmware updates

Sony regularly updates the PS4’s firmware — as of this writing it’s currently at version 3.11. Recent updates to the console have brought along features like:

  • YouTube live game broadcasting
  • Party chat
  • Game communities and events sections
  • Suspend/resume: The console can be put into “rest mode” and then woken up to resume gameplay without needing to relaunch a game.
  • Share Play: PS4 owners can “host” a play session and “hand off” the game controller for up to 60 minutes to one of their friends on the PlayStation Network; at the end of the session, players can simply restart. Share Play also works with co-op games that let two players engage at the same time. It works with any PS4 game, and only the host player needs a copy of the game and a PlayStation Plus membership.
  • Restore: You can now back up data stored on a PS4 and restore it.

The 2.00 firmware had some notable bugs, but Sony has addressed them with a recent 2.01 update. Firmware version 2.02 (also a forced update) brought along more universal stability to the system.


PS4 pros

Here are the areas where the PS4 excels — and where it has an edge over the Xbox One:

PlayStation Plus

Compared with Xbox Live’s Gold membership, PlayStation Plus still comes out as the better overall deal. The Instant Game Collection titles that come with the subscription can be played across various PlayStation platforms, and the quality of these titles tends to be higher, though recently the free games have started to underwhelm. You need PlayStation Plus to play online, and it also offers discounts, exclusive betas and demos, cloud save storage, game trials and automatic system updates.

PlayStation Plus is $50, £40 or AU$70 a year, while Xbox Live Gold is $60, £40 or AU$85 per year, although you may be able to get discounted vouchers from retailers.

System interface

Overall, the PS4’s interface feels zippier than the Xbox One’s, even with Xbox’s new fall 2015 update. Games install quicker and moving around menus is a much smoother experience. It’s by far the easier system to navigate, as opposed to the Xbox One’s sometimes confusing presentation.

Game streaming

Sony’s answer to backward compatibility is PlayStation Now, a subscription service that allows PS4 owners to stream a game over the Internet. That said, your experience will vary depending on your Internet connection. Suffice it to say, playing shooters and other “twitch” games on PS Now isn’t great, but it’s certainly improving — as is the growing collection of playable titles. When it launched we wrote PS Now off. Now we think it’s a viable option for those who are passionate about legacy PlayStation games.

Xbox One recently introduced Xbox 360 backward compatibility, which works with physical media, as opposed to PS Now’s digital-only operation.


Aside from a zippier all-around experience in the system software, the PS4 tends to install games quicker than the Xbox One. There’s also some evidence that multiplatform games play better and run in higher resolutions than they do on the Xbox One. In some cases, the PS4 will also play at a higher frame rate than the Xbox One.

Game broadcasting and social sharing

The DualShock 4 controller has a button dedicated to broadcasting and sharing options. The whole feature set is wonderfully tied into the fabric of the system and makes sharing fairly painless. Players can instantly snap screenshots, tweet photos and broadcast gameplay to Twitch (a free online streaming-gaming video service), all within a few clicks.

PS4 owners can also save these videos and screens and put them on a USB drive, edit them on the PS4 or upload them to YouTube, Facebook or Twitter.

It’s worth noting that publishers can block the ability to share content – it’s usually done to avoid leaking major plot spoilers in a game.

Independent games

Sony has committed to bringing popular independent games to PS4. While a lot of these titles have previously been available for PC, games like Rocket League, No Man’s Sky and SOMA (among many others) will only see console debuts on PS4.

User-accessible hard drive

The PS4 ships with a 500GB, 5,400rpm hard drive (and is also available in a 1TB model), but you can easily swap it out for a 2.5-inch SATA drive with a larger capacity, or an SSHD or SSD for potentially increased performance. The Xbox One, by comparison, doesn’t allow the swapping of hard drives – instead you have to attach an external USB drive.

DualShock 4 controller

The DualShock 4 is the best PlayStation controller yet and features a front-facing touchpad that can also be clicked. Players can bring their own headphones and plug them directly into the controller so they don’t disturb the neighbors during nighttime gaming.

The controller is very comfortable and can be charged with a Micro-USB cable. The only real downside is the battery: unlike the Xbox One controller’s battery, the PS4’s can’t be replaced. Its battery life is good, but not great.


Media playback

The PS4’s media player app supports a wide range of file formats and codecs. Files can be played off a home DLNA server or USB drive.

PS4 cons

Here are the areas where the PS4 could use a little work:

Media apps: Good, but slightly lagging behind the Xbox One

The PS4 offers mainstay media and entertainment apps like Amazon Instant Video, Netflix and Hulu Plus, but is noticeably missing apps that the Xbox One does have, such as ESPN, Comedy Central, Fox and Fios.

There is support for sports, though — PS4 owners can use MLB, NBA (only on PS4), NFL Sunday Ticket and NHL apps.

PlayStation Plus cloud storage

Cloud save storage was recently bumped up to a generous 10GB worth of data, but only for PS+ members. We also think cloud saves should sync automatically no matter which PS4 you’re playing on, instead of gamers having to manually upload saves from machines that aren’t their “primary console.” In this specific category, Xbox One has PS4 beat.


Wonky eject button

A number of PS4 owners have experienced an issue with the PS4’s touch-sensitive eject button. Some complain that it can engage by itself, causing the console to either eject a disc during play or randomly make beeps.

Sony has since corrected this and now 1TB consoles ship with a tactile eject button.

PlayStation VR

PlayStation VR will finally be with us this October, and we can’t wait to stick our faces in it. Sony’s PS4 virtual reality headset is coming in way cheaper than the likes of the Oculus Rift or HTC’s Vive, with an RRP of just £350/$399.

In our Virtual Reality post you’ll find our guide to the best preorder deals out there for the headset, with prices starting around the aforementioned £350/$399. Pricier options also include the PS4 camera. Don’t expect many discounts before release, but we’ll keep you posted if any pop up.





Game Consoles and Generation

Why Buy Video Game Consoles?

From the late ’80s through the end of the ’90s, video game consoles were mostly single-function devices. Then, when Sony launched the PlayStation 2 with a built-in DVD player in 2000, gaming consoles became a major part of our entertainment hubs. Today, consoles include Blu-ray players and entertainment streaming services. Whether playing video games, watching a new Blu-ray or listening to a music service, a new console has something for everyone in the family.

While video game consoles are all-in-one entertainment machines, the most important part of them is their ability to play games. Although many gamers will say that computers have the best graphics, new gaming technology is birthed on consoles. Also, some of gaming’s biggest franchises are only available on consoles. Finally, consoles tend to be future-ready devices, as manufacturers know that you’ll be using them for at least six years. This means that over the next few years, new peripherals and technologies will supplement the consoles.

The best game consoles are the Sony PlayStation 4, the Microsoft Xbox One and the Microsoft Xbox 360. There are also several micro-consoles available at a lower price point. Be sure to read our articles about video game consoles to learn more about modern gaming.

Video Game Consoles: What to Look For

First and foremost, a video game console has to have the ability to play exciting games. We judged these consoles for their games, media apps and technological capabilities. We favor consoles with powerful tech that will handle high-end games for the next five to seven years. On top of that, we want great gaming experiences, but because many games are available on multiple consoles, we judged the systems on their exclusive content. Finally, we looked at each console’s non-gaming functions such as entertainment apps and social capabilities. Below are the criteria we used to evaluate video game consoles:


Consoles will never trump PC gaming in sheer computing power, but the advantage of a console is simplicity. Once you plug it in, you are set for years to come. Still, you want a console that will perform well for years to come. We looked at the muscle of each console and compared their specs against each other. It is important to note that the most powerful console won’t necessarily have the best games. However, the most powerful console will likely have the best-looking games.


Video game consoles today offer online gaming and loads of additional features. We looked for consoles that offer remote play through a handheld device, excellent online connectivity and extra features such as content streaming. Games are more social than ever thanks to online components, so you should look for a system that offers headset support to talk with your friends.

Multimedia & Social

Gaming consoles are still primarily for playing games, but they are increasingly becoming the entertainment hub for the entire family. For example, the Xbox One has an HDMI-in port so you can plug your cable box directly into the console and play games while watching a football game. We looked at each console’s media apps and social capabilities.

Help & Support

A video game console manufacturer should provide timely and comprehensive help and support for technical issues by offering several contact methods, including online chat, telephone and email. The manufacturer should provide detailed information online about its consoles. We also looked for manufacturers that offer good warranties on new products.

As you can see, there are quite a few aspects to consider before making a final decision on which video game console fits your lifestyle best. Each system offers a unique style of gaming, and your specific entertainment preferences will determine which console is right for you.


First generation

The Magnavox Odyssey was the first video game console, released in 1972.

The first video games appeared in the 1960s. They were played on massive computers connected to vector displays, not analog televisions. Ralph H. Baer conceived the idea of a home video game in 1951. In the late 1960s, while working for Sanders Associates, Baer created a series of video game console designs. One of these designs, a 1966 prototype nicknamed the “Brown Box”, featured changeable game modes and was demonstrated to several TV manufacturers, ultimately leading to an agreement between Sanders Associates and Magnavox.


Magnavox Odyssey Console Set

In 1972, Magnavox released the Magnavox Odyssey, the first home video game console which could be connected to a TV set. Ralph Baer’s initial design had called for a huge row of switches that would allow players to turn on and off certain components of the console (the Odyssey lacked a CPU) to create slightly different games like tennis, volleyball, hockey, and chase. Magnavox replaced the switch design with separate cartridges for each game. Although Baer had sketched up ideas for cartridges that could include new components for new games, the carts released by Magnavox all served the same function as the switches and allowed players to choose from the Odyssey’s built-in games.

The Odyssey initially sold about 100,000 units, making it moderately successful, and it was not until Atari’s arcade game Pong popularized video games that the public began to take more notice of the emerging industry. By autumn 1975, Magnavox, bowing to the popularity of Pong, cancelled the Odyssey and released a scaled-down version that played only Pong and hockey, the Odyssey 100. A second, “higher end” console, the Odyssey 200, was released with the 100 and added on-screen scoring, up to four players, and a third game—Smash. Almost simultaneously released with Atari’s own home Pong console through Sears, these consoles jump-started the consumer market. All three of the new consoles used simpler designs than the original Odyssey did with no board game pieces or extra cartridges.

In the years that followed, the market saw many companies rushing similar consoles to market. After General Instrument released their inexpensive microchips, each containing a complete console on a single chip, many small developers began releasing consoles that looked different externally, but internally were playing exactly the same games.

Most of the consoles from this era were dedicated consoles playing only the games that came with the console. These video game consoles were often just called video games, because there was little reason to distinguish the two yet. While a few companies like Atari, Magnavox, and newcomer Coleco pushed the envelope, the market became flooded with simple, similar video games.

Second generation

Home consoles

The Atari 2600 became the most popular game console of the second generation.

Fairchild released the Fairchild Video Entertainment System (VES) in 1976. While there had been previous game consoles that used cartridges, either the cartridges had no information and served the same function as flipping switches (the Odyssey) or the console itself was empty (Coleco Telstar) and the cartridge contained all of the game components. The VES, however, contained a programmable microprocessor so its cartridges only needed a single ROM chip to store microprocessor instructions.

RCA and Atari soon released their own cartridge-based consoles, the RCA Studio II and the Atari 2600 (originally branded as the Atari Video Computer System), respectively.


Atari 2600 Wood 4Sw Set

Handheld game consoles

The first handheld game console with interchangeable cartridges was the Microvision, designed by Smith Engineering and distributed and sold by Milton Bradley in 1979. Crippled by a small, fragile LCD and a very narrow selection of games, it was discontinued two years later.

The Epoch Game Pocket Computer was released in Japan in 1984. The Game Pocket Computer featured an LCD screen with 75 × 64 resolution and could produce graphics at about the same level as early Atari 2600 games. The system sold poorly, and as a result only five games were made for it.

Nintendo’s Game & Watch series of dedicated game systems proved more successful. It helped to establish handheld gaming as popular and lasted until 1991. Many Game & Watch games would later be re-released on Nintendo’s subsequent handheld systems.

Rebirth of the home console market

The VES continued to be sold at a profit after 1977, and both Bally (with their Home Library Computer in 1977) and Magnavox (with the Odyssey² in 1978) brought their own programmable cartridge-based consoles to the market. However, it was not until Atari released a conversion of the golden age arcade hit Space Invaders in 1980 for the Atari 2600 that the home console industry took off. Many consumers bought an Atari console so they could play Space Invaders at home. The unprecedented success of Space Invaders started the trend of console manufacturers trying to get exclusive rights to arcade titles, and the trend of advertisements for game consoles claiming to bring the arcade experience home.

Throughout the early 1980s, other companies released video game consoles of their own. Many of the video game systems (e.g. ColecoVision) were technically superior to the Atari 2600, and marketed as improvements over the Atari 2600. However, Atari dominated the console market in the early 1980s.

Video game crash of 1983

In 1983, the video game business suffered a much more severe crash. A flood of consoles, low-quality games from smaller companies (especially for the 2600), poorly received titles hyped by industry leader Atari such as E.T. and the 2600 version of Pac-Man, and a growing number of home computer users caused consumers and retailers to lose faith in video game consoles. Most video game companies filed for bankruptcy or moved into other industries, abandoning their game consoles. A group of employees from Mattel Electronics formed the INTV Corporation and bought the rights to the Intellivision. INTV alone continued to manufacture the Intellivision in small quantities and release new Intellivision games until 1991. All other North American game consoles were discontinued by 1984.

Third generation

Home consoles

The NES made home console video games popular again in America after the 1983 crash.

In 1983, Nintendo released the Family Computer (or Famicom) in Japan. The Famicom supported high-resolution sprites, larger color palettes, and tiled backgrounds. This allowed Famicom games to be longer and have more detailed graphics. Nintendo began attempts to bring their Famicom to the U.S. after the video game market had crashed. In the U.S., video games were seen as a fad that had already passed. To distinguish its product from older game consoles, Nintendo released their Famicom as the Nintendo Entertainment System (NES) which used a front-loading cartridge port similar to a VCR, included a plastic “robot” (R.O.B.), and was initially advertised as a toy.


NES Console Set

The NES was the best-selling console in the history of North America and revitalized the video game market. Mario of Super Mario Bros. became a global icon starting with his NES games. Nintendo took an unusual stance with third-party developers for its console: it contractually restricted them to three NES titles per year and forbade them from developing for other video game consoles. The practice ensured Nintendo’s market dominance and prevented the flood of trash titles that had helped kill the Atari, but was ruled illegal late in the console’s life cycle.

Sega’s Master System was intended to compete with the NES, but never gained any significant market share in the US or Japan and was barely profitable. It fared notably better in PAL territories. In Europe and South America, the Master System competed with the NES and saw new game releases even after Sega’s next-generation Mega Drive was released. In Brazil where strict importation laws and rampant piracy kept out competitors, the Master System outsold the NES by a massive margin and remained popular into the ’90s.

Jack Tramiel, after buying Atari, downsizing its staff, and settling its legal disputes, attempted to bring Atari back into the home console market. Atari released a smaller, sleeker, cheaper version of their popular Atari 2600. They also released the Atari 7800, a console technologically comparable with the NES and backwards compatible with the 2600. Finally Atari repackaged its 8-bit XE home computer as the XEGS game console. The new consoles helped Atari claw its way out of debt, but failed to gain much market share from Nintendo. Atari’s lack of funds meant that its consoles saw fewer releases, lower production values (both the manuals and the game labels were frequently black and white), and limited distribution.

Handheld game consoles

In the later part of the third generation, Nintendo also introduced the Game Boy, which almost single-handedly solidified, and then proceeded to dominate, the previously scattered handheld market for 15 years. While the Game Boy product line was incrementally updated every few years, every model up to the Game Boy Micro and Nintendo DS (and partially the Game Boy Color) was backwards compatible with the original released in 1989. Since the Game Boy’s release, Nintendo has dominated the handheld market. Additionally, two popular 8-bit computers, the Commodore 64 and Amstrad CPC, were repackaged as the Commodore 64 Games System and Amstrad GX4000 respectively, for entry into the console market.



Cloud Technology (Computing and Connection)

What is the cloud? Where is the cloud? Are we in the cloud now? These are all questions you’ve probably heard or even asked yourself. The term “cloud computing” is everywhere.

In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of your computer’s hard drive. The cloud is just a metaphor for the Internet. It goes back to the days of flowcharts and presentations that would represent the gigantic server-farm infrastructure of the Internet as nothing but a puffy, white cumulus cloud, accepting connections and doling out information as it floats.


What cloud computing is not about is your hard drive. When you store data on or run programs from the hard drive, that’s called local storage and computing. Everything you need is physically close to you, which means accessing your data is fast and easy, for that one computer, or others on the local network. Working off your hard drive is how the computer industry functioned for decades; some would argue it’s still superior to cloud computing, for reasons I’ll explain shortly.

The cloud is also not about having a dedicated network attached storage (NAS) hardware or server in residence. Storing data on a home or office network does not count as utilizing the cloud. (However, some NAS will let you remotely access things over the Internet, and there’s at least one brand from Western Digital named “My Cloud,” just to keep things confusing.)

For it to be considered “cloud computing,” you need to access your data or your programs over the Internet, or at the very least, have that data synced with other information over the Web. In a big business, you may know all there is to know about what’s on the other side of the connection; as an individual user, you may never have any idea what kind of massive data processing is happening on the other end. The end result is the same: with an online connection, cloud computing can be done anywhere, anytime.

Consumer vs. Business

Let’s be clear here. We’re talking about cloud computing as it impacts individual consumers—those of us who sit back at home or in small-to-medium offices and use the Internet on a regular basis.

There is an entirely different “cloud” when it comes to business. Some businesses choose to implement Software-as-a-Service (SaaS), where the business subscribes to an application it accesses over the Internet. (Think Salesforce.com.) There’s also Platform-as-a-Service (PaaS), where a business can create its own custom applications for use by all in the company. And don’t forget the mighty Infrastructure-as-a-Service (IaaS), where players like Amazon, Microsoft, Google, and Rackspace provide a backbone that can be “rented out” by other companies. (For example, Netflix provides services to you because it’s a customer of the cloud services at Amazon.)

Of course, cloud computing is big business: the market generated about $100 billion in 2012, and could reach $127 billion by 2017 and $500 billion by 2020.

Cloud computing promises several attractive benefits for businesses and end users. Three of the main benefits of cloud computing include:

  • Self-service provisioning: End users can spin up computing resources for almost any type of workload on-demand.
  • Elasticity: Companies can scale up as computing needs increase and then scale down again as demands decrease.
  • Pay per use: Computing resources are measured at a granular level, allowing users to pay only for the resources and workloads they use.
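The pay-per-use benefit can be sketched as a toy metering calculation. The rates and usage figures below are invented for illustration and are not any provider’s real pricing:

```python
# Toy illustration of granular, pay-per-use cloud billing.
# Rates and usage numbers are hypothetical, not a real provider's prices.

RATES = {
    "vm_hours": 0.05,          # dollars per VM-hour
    "storage_gb_month": 0.02,  # dollars per GB stored per month
    "egress_gb": 0.09,         # dollars per GB transferred out
}

def monthly_bill(usage):
    """Sum metered usage against per-unit rates."""
    return sum(RATES[item] * amount for item, amount in usage.items())

usage = {"vm_hours": 720, "storage_gb_month": 100, "egress_gb": 50}
print(f"${monthly_bill(usage):.2f}")  # → $42.50
```

The point is that nothing is billed up front: every resource is metered, and the bill is just the sum of what was actually consumed.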


Cloud Structure

Cloud computing services can be private, public or hybrid.

Private cloud services are delivered from a business’ data center to internal users. This model offers versatility and convenience, while preserving management, control and security. Internal customers may or may not be billed for services through IT chargeback.

In the public cloud model, a third-party provider delivers the cloud service over the Internet. Public cloud services are sold on-demand, typically by the minute or the hour. Customers only pay for the CPU cycles, storage or bandwidth they consume.  Leading public cloud providers include Amazon Web Services (AWS), Microsoft Azure, IBM/SoftLayer and Google Compute Engine.

Hybrid cloud is a combination of public cloud services and on-premises private cloud – with orchestration and automation between the two. Companies can run mission-critical workloads or sensitive applications on the private cloud while using the public cloud for bursty workloads that must scale on-demand. The goal of hybrid cloud is to create a unified, automated, scalable environment which takes advantage of all that a public cloud infrastructure can provide, while still maintaining control over mission-critical data.
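The bursting behavior described above can be sketched in a few lines. The capacity figure is invented; real orchestration layers make this decision per-workload with far more inputs:

```python
# Toy sketch of hybrid-cloud "bursting": run work on the private
# cloud up to its capacity, and overflow the rest to the public cloud.

PRIVATE_CAPACITY = 100  # hypothetical units of concurrent work

def place_workload(units):
    """Split a workload between private capacity and public overflow."""
    private = min(units, PRIVATE_CAPACITY)
    public = units - private  # elastic overflow, scales on demand
    return {"private": private, "public": public}

print(place_workload(60))   # {'private': 60, 'public': 0}
print(place_workload(250))  # {'private': 100, 'public': 150}
```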

Although cloud computing has changed over time, it has always been divided into three broad service categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as service (SaaS).

IaaS providers such as AWS supply a virtual server instance and storage, as well as application program interfaces (APIs) that let users migrate workloads to a virtual machine (VM). Users have an allocated storage capacity and start, stop, access and configure the VM and storage as desired. IaaS providers offer small, medium, large, extra-large, and memory- or compute-optimized instances, in addition to customized instances, for various workload needs.

In the PaaS model, providers host development tools on their infrastructures. Users access those tools over the Internet using APIs, Web portals or gateway software. PaaS is used for general software development and many PaaS providers will host the software after it’s developed. Common PaaS providers include Salesforce.com’s Force.com, Amazon Elastic Beanstalk and Google App Engine.

SaaS is a distribution model that delivers software applications over the Internet; these are often called Web services. Microsoft Office 365 is a SaaS offering for productivity software and email services. Users can access SaaS applications and services from any location using a computer or mobile device that has Internet access. 

Cloud clients

Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets, smartphones, and any Ethernet-enabled device such as home automation gadgets. Some of these devices—cloud clients—rely on cloud computing for all or most of their applications, to the point of being essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5, these Web user interfaces can achieve a similar, or even better, look and feel than native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin client computing) are delivered via a screen-sharing technology.


The biggest question most people have about cloud computing: is it safe?

The short answer: no, not inherently.

Although the cloud seems virtual, everything it is based on is physical machinery. The safety of your data is only as strong as the defenses standing between it and anyone with the will and determination to get at it.

Common Cloud Examples

The lines between local computing and cloud computing sometimes get very, very blurry. That’s because the cloud is part of almost everything on our computers these days. You can easily have a local piece of software (for instance, Microsoft Office 365) that utilizes a form of cloud computing for storage (Microsoft OneDrive).

That said, Microsoft also offers a set of Web-based apps, Office Online, that are Internet-only versions of Word, Excel, PowerPoint, and OneNote accessed via your Web browser without installing anything. That makes them a version of cloud computing (Web-based=cloud).


Office Online

Some other major examples of cloud computing you’re probably using:

Google Drive:

This is a pure cloud computing service, with all the storage found online so it can work with the cloud apps: Google Docs, Google Sheets, and Google Slides. Drive is also available on more than just desktop computers; you can use it on tablets like the iPad or on smartphones, and there are separate apps for Docs and Sheets, as well. In fact, most of Google’s services could be considered cloud computing: Gmail, Google Calendar, Google Maps, and so on.

Apple iCloud:

Apple’s cloud service is primarily used for online storage, backup, and synchronization of your mail, contacts, calendar, and more. All the data you need is available to you on your iOS, Mac OS, or Windows device (Windows users have to install the iCloud control panel). Naturally, Apple won’t be outdone by rivals: it offers cloud-based versions of its word processor (Pages), spreadsheet (Numbers), and presentations (Keynote) for use by any iCloud subscriber. iCloud is also the place iPhone users go to utilize the Find My iPhone feature that’s all important when the handset goes missing.

Amazon Cloud Drive:

Storage at the big retailer is mainly for music, preferably MP3s that you purchase from Amazon, and images—if you have Amazon Prime, you get unlimited image storage. Amazon Cloud Drive also holds anything you buy for the Kindle. It’s essentially storage for anything digital you’d buy from Amazon, baked into all its products and services.

Box, Dropbox, and SugarSync:

These hybrid services all say they work in the cloud because they store a synced version of your files online, but they also sync those files with local storage. Synchronization is a cornerstone of the cloud computing experience, even if you do access the file locally.

Likewise, it’s considered cloud computing if you have a community of people with separate devices that need the same data synced, be it for work collaboration projects or just to keep the family in sync. For more, check out The Best Cloud Storage and File-Syncing Services for 2016.

Cloud Hardware

Right now, the primary example of a device that is completely cloud-centric is the Chromebook. These are laptops that have just enough local storage and power to run the Chrome OS, which essentially turns the Google Chrome Web browser into an operating system. With a Chromebook, most everything you do is online: apps, media, and storage are all in the cloud.


Chrome Book

Or you can try a Chromebit, a smaller-than-a-candy-bar stick that turns any display with an HDMI port into a usable computer running Chrome OS.

Of course, you may be wondering what happens if you’re somewhere without a connection and you need to access your data. This is currently one of the biggest complaints about Chrome OS, although its offline functionality (that is, non-cloud abilities) is expanding.

The Chromebook isn’t the first product to try this approach. So-called “dumb terminals” that lack local storage and connect to a local server or mainframe go back decades. The first Internet-only product attempts included the old NIC (New Internet Computer), the Netpliance iOpener, and the disastrous 3Com Ergo Audrey. You could argue they all debuted well before their time—the dial-up speeds of the 1990s were training wheels compared to the accelerated broadband connections of today. That broadband is the reason cloud computing works at all: the connection to the Internet can be as fast as the connection to the hard drive. (At least it is for some of us.)

Arguments Against the Cloud

In a 2013 edition of his feature What If?, xkcd cartoonist (and former NASA roboticist) Randall Munroe tried to answer the question of “When—if ever—will the bandwidth of the Internet surpass that of FedEx?” The question was posed because no matter how great your broadband connection, it’s still cheaper to send a package of hundreds of gigabytes of data via FedEx’s “sneakernet” of planes and trucks than it is to try to send it over the Internet. (The answer, Munroe concluded, is the year 2040.)

Cory Doctorow over at boingboing took Munroe’s answer as “an implicit critique of cloud computing.” To him, the speed and cost of local storage easily outstrip using a wide-area network connection controlled by a telecom company (your ISP).


Steve Wozniak

That’s the rub. The ISPs, telcos, and media companies control your access. Putting all your faith in the cloud means you’re also putting all your faith in continued, unfettered access. You might get this level of access, but it’ll cost you. And it will continue to cost more and more as companies find ways to make you pay by doing things like metering your service: the more bandwidth you use, the more it costs.

Maybe you trust those corporations. That’s fine, but there are plenty of other arguments against going into the cloud whole hog. Apple co-founder Steve Wozniak decried cloud computing in 2012, saying: “I think it’s going to be horrendous. I think there are going to be a lot of horrible problems in the next five years.”

In part, that comes from the potential for crashes. When there are problems at a company like Amazon, which provides cloud storage services to big-name companies like Netflix and Pinterest, it can take out all those services (as happened in the summer of 2012). In 2014, outages afflicted Dropbox, Gmail, Basecamp, Adobe, Evernote, iCloud, and Microsoft; in 2015 the outages hit Apple, Verizon, Microsoft, AOL, Level 3, and Google. Microsoft had another in 2016. The problems typically last for just hours.

Wozniak was concerned more about the intellectual property issues. Who owns the data you store online? Is it you or the company storing it? Consider how many times there’s been widespread controversy over the changing terms of service for companies like Facebook and Instagram—which are definitely cloud services—regarding what they get to do with your photos. There’s also a difference between data you upload, and data you create in the cloud itself—a provider could have a strong claim on the latter. Ownership is a relevant factor to be concerned about.

After all, there’s no central body governing use of the cloud for storage and services. The Institute of Electrical and Electronics Engineers (IEEE) is trying: it created an IEEE Cloud Computing Initiative in 2011 to establish standards for use, especially for the business sector. The Supreme Court ruling against Aereo could have told us a lot about copyright of files in the cloud… but the court side-stepped the issue to preserve the cloud computing status quo.

Cloud computing—like so much about the Internet—is a little bit like the Wild West, where the rules are made up as you go, and you hope for the best.

The future

Cloud computing is therefore still as much a research topic as it is a market offering. What is clear through the evolution of cloud computing services is that the chief technical officer (CTO) is a major driving force behind cloud adoption. The major cloud technology developers continue to invest billions a year in cloud R&D; in 2011, for example, Microsoft committed 90% of its US$9.6bn R&D budget to its cloud. Centaur Partners predicts that SaaS revenue will grow from US$13.5B in 2011 to $32.8B in 2016, an expansion that also includes finance and accounting SaaS. Additionally, more industries are turning to cloud technology as an efficient way to improve services, thanks to its ability to reduce overhead costs and downtime and to automate infrastructure deployment.


  • Last Update 19 June 2016


3D Technologies

3D Printers

3D printers might look like something from the future, but they already serve as fundamental tools for many industries. These machines use an additive process to create functional objects from digital files. Their versatile extruders can lay out intricate designs for use in all sorts of situations.


How Does a 3D Printer Work: The Basics

The printing process can be understood as a few simple steps. It starts with a digital designer. This person creates the blueprint for the project. Once this happens, the 3D printer uses the digital design as a guide. The machine pushes molten plastic through an extruder and layer by layer the object takes shape. When it finishes, the designer can pry the finished prototype from the build plate and clean it up.

How Does a 3D Printer Work: Layers

3D printers rely on computer-aided design (CAD) software to determine the shape and size of a print. Once created, the 3D file goes through a digital slicing process, which cuts the model into printable layers. These printers then use this sliced layer information to determine how much material to extrude and where exactly the material needs to go. They extrude the patterns one layer at a time until the 3D print finishes building.
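The slicing step above can be sketched in a few lines. A real slicer also computes toolpaths and infill; this minimal sketch (with invented model and layer heights) just cuts a model into z-layers:

```python
# Minimal illustration of slicing: cut a model of known height
# into evenly spaced z-layers for the printer to build in order.

def slice_heights(model_height_mm, layer_height_mm):
    """Return the z-coordinate of each printable layer.

    Assumes the model height is a whole multiple of the layer height.
    """
    n_layers = round(model_height_mm / layer_height_mm)
    return [round((i + 1) * layer_height_mm, 4) for i in range(n_layers)]

layers = slice_heights(model_height_mm=10.0, layer_height_mm=0.2)
print(len(layers))   # 50 layers for a 10 mm part at 0.2 mm resolution
print(layers[:3])    # [0.2, 0.4, 0.6]
```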

This additive-build model remains the most common style of 3D printing available, but stereolithography printers produce similar results. With stereolithography, the printer selectively exposes a light-sensitive resin, solidifying one layer at a time.


How Does a 3D Printer Work: Print Speed

This building process can take many hours, no matter the style of 3D printer you use. The speed of the printing process depends on the size and complexity of the print. 3D printing software controls the density of the object, and most models use a honeycomb pattern to fill the interior of the print. This pattern requires far less filament than solid, flat layers and speeds up the overall print considerably.

The density of the interior is not the only factor to consider when creating a 3D file. When you slice a 3D design, you can choose the thickness of the print layers, also referred to as print resolution. Thick layers print faster than thin layers, but they also result in a blockier look for the finished print. Thin layers allow the 3D printer to create much smoother prints, but they can take significantly longer to finish.
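The thickness-versus-time trade-off above can be sketched numerically. The seconds-per-layer figure is invented for illustration; real print times also depend on perimeter length, infill, and speed settings:

```python
# Toy estimate of how layer thickness trades print time for smoothness.
# The 30 seconds-per-layer figure is a made-up illustrative constant.

def estimate_print_minutes(part_height_mm, layer_height_mm, seconds_per_layer=30):
    """Rough print time: more (thinner) layers means a longer print."""
    layers = round(part_height_mm / layer_height_mm)
    return layers * seconds_per_layer / 60

for layer_height in (0.3, 0.2, 0.1):  # thick (blocky) -> thin (smooth)
    mins = estimate_print_minutes(30.0, layer_height)
    print(f"{layer_height} mm layers: ~{mins:.0f} min")
```

Halving the layer height roughly doubles the layer count, and with it the print time, which is exactly the trade-off described above.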

What Materials Do 3D Printers Use?

Nowadays, you can purchase plastic filament online. While many consumer 3D printers use some form of plastic, industrial printers can extrude and manipulate many other materials, such as wood, nylon, copper and various high-quality plastics. The availability of these alternate materials allows for immense versatility. Some industrial 3D printers extrude wax, which melts away when the design is cast in metal, and others sinter metal powder into rigid structures.

This wide variety of workable materials ensures that 3D printers work as valuable assets for all sorts of projects. Industries that rely on highly specialized machine parts use them to create replacements and those that rely on working prototypes or short-run products use 3D printers for simple manufacturing.

Now let’s take a look at some of the more popular technologies behind 3D printing:

Fused Deposition Modeling (FDM)/Fused Filament Fabrication (FFF):

Fused Deposition Modeling (FDM) was invented by S. Scott Crump a few years after Chuck Hull initially invented 3D printing. Crump went on to commercialize the technology in 1990 via Stratasys, which holds a trademark on the term. This is why the same general technology is often referred to as Fused Filament Fabrication (FFF).

The way this technology works is rather simple, which is why 95% of all desktop 3D printers found in homes and garages today use FDM/FFF. A thermoplastic such as PLA or ABS is fed into an extruder and through a hotend. The hotend melts the plastic, turning it into a gooey liquid. The printer then takes its instructions from the computer in the form of G-code and deposits the molten plastic layer by layer until an entire object is fabricated. The plastic cools and solidifies rapidly, providing a solid surface for each additional layer’s deposition.
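The G-code mentioned above is just plain-text movement commands. Here is a hedged sketch of emitting the moves for one square perimeter; the feed rate and extrusion factor are invented, and real slicer output also sets temperatures, fan speed, and retraction:

```python
# Minimal sketch of generating G-code moves for one square perimeter.
# G0 = travel move, G1 = linear move; E is cumulative extruded filament.
# The F1200 feed rate and 0.05 extrusion factor are illustrative only.

def square_layer_gcode(z, size_mm, extrude_per_mm=0.05):
    """Emit G-code lines tracing a size_mm square at height z."""
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = [f"G1 Z{z:.2f} F1200 ; move to layer height"]
    prev = corners[0]
    lines.append(f"G0 X{prev[0]:.2f} Y{prev[1]:.2f} ; travel to start")
    e = 0.0
    for x, y in corners[1:]:
        dist = abs(x - prev[0]) + abs(y - prev[1])  # axis-aligned edges
        e += dist * extrude_per_mm                  # advance the filament
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.3f} ; extrude edge")
        prev = (x, y)
    return lines

for line in square_layer_gcode(z=0.2, size_mm=20):
    print(line)
```

A slicer repeats something like this for every layer, raising Z each time, which is why sliced files for large prints run to hundreds of thousands of lines.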

Depending on the maximum temperature of the hotend as well as other variables, numerous other materials besides ABS and PLA may be used, including composites of both materials, nylon, and more.

Stereolithography (SLA):

As we’ve mentioned above, this was the very first 3D printing technology, invented in 1986. With 3D Systems holding many of the patents involved, which are in the process of expiring over the next few years, there has not been a tremendous amount of competition within the market. As a result, the technology has been overpriced and used less often than the FDM/FFF alternative.

Instead of extruding a material out of a hotend, the SLA process works with a laser or DLP projector combined with a photosensitive resin. Objects are printed in a vat of resin as a laser or other lighting source like a projector slowly cures (hardens) the resin layer-by-layer as the object is formed. Typically SLA machines are able to achieve far better accuracy and less of a layered appearance than FDM/FFF technology can.

Selective Laser Sintering (SLS)/Selective Laser Melting (SLM)/Direct Metal Laser Sintering (DMLS):

All three of these technologies are very similar, yet have marked differences. We’ve found that many individuals use the terms interchangeably when, in fact, there are reasons to use one method over the others. Selective Laser Sintering (SLS) and Direct Metal Laser Sintering (DMLS) are in fact the same technology; the difference in terminology is based on the materials used. DMLS specifically refers to the layer-by-layer sintering of metal powders using a laser beam, while SLS is the same process with non-metal materials such as plastics, ceramics, glass, etc. Neither DMLS nor SLS fully melts the material; instead they sinter it, fusing the particles together at the molecular level. When dealing with metal alloys, DMLS is ideal, as the constituents have varying melting points, meaning a full melt can sometimes be difficult to achieve.

On the other hand, when dealing with metals consisting of a single material, for instance titanium, Selective Laser Melting (SLM) is the way to go, as a laser is able to completely melt the particles together. All three processes are currently expensive and out of the budgets of most individuals and even small businesses because of the high-powered lasers required. Additionally, safety precautions must be taken, meaning further expense for the user.


PolyJet:

A technology invented by the Israeli company Objet, which merged with Stratasys back in 2012, PolyJetting incorporates elements of both inkjet 2D printing and the stereolithography process. Basically, inkjet nozzles spray a liquid photosensitive resin onto a build platform, much as ink is sprayed onto a piece of paper during a typical 2D printing process. Immediately following the ejection of the material, a UV light source cures it before the next layer of photosensitive liquid is sprayed on top. The process repeats until an entire object is fabricated. Stratasys currently uses such technologies within their popular Connex family of machines.

Plaster-Based 3D Printing (PP):

This is a process requiring two different materials: a powder (gypsum plaster, starch, etc.) that sits on a print bed, and a binding ink that is ejected onto the bed of powder from a nozzle similar to that of an inkjet printer, hardening it. Once one layer of powder is bound, a rake-like instrument sifts additional powder over that layer, and the process continues until an entire object is fabricated. This technology was originally invented at MIT in the early ’90s before being commercialized in 1995 by a company called Z Corporation, which was acquired in 2011 by 3D Systems for $137 million.


Every week, it seems, new approaches to 3D printing are presented. New technologies have recently been unveiled, such as HP’s Multi Jet Fusion and Carbon3D’s CLIP technology. Over the next several years it will be interesting to see which technologies take hold and which fall by the wayside.


Operating System (Pro. choosing)


Operating Systems

An operating system (OS) is an intermediary between users and computer hardware. It provides an environment in which users can execute programs conveniently and efficiently.

In technical terms, it is software that manages hardware. An operating system controls the allocation of resources and services such as memory, processors, devices and information.

Following are some of the important functions of an operating system:

  • Memory Management
  • Processor Management
  • Device Management
  • File Management
  • Security
  • Control over system performance
  • Job accounting
  • Error detecting aids
  • Coordination between other software and users

Memory Management

Memory management refers to the management of primary memory, or main memory. Main memory is a large array of words or bytes, where each word or byte has its own address.

Main memory provides fast storage that can be accessed directly by the CPU, so for a program to be executed, it must be in main memory. The operating system performs the following activities for memory management:

  • Keeps track of primary memory, i.e. which parts are in use, by whom, and which parts are free.
  • In multiprogramming, decides which process gets memory, when, and how much.
  • Allocates memory when a process requests it.
  • De-allocates memory when a process no longer needs it or has terminated.
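The bookkeeping listed above can be sketched as a toy frame allocator. All names and sizes here are invented for illustration; a real OS tracks pages and frames with far more machinery:

```python
# Toy sketch of OS memory bookkeeping: track which frames are free
# and which process owns each allocated frame.

class MemoryManager:
    def __init__(self, n_frames):
        self.owner = [None] * n_frames  # None means the frame is free

    def allocate(self, pid, n):
        """Give process `pid` the first n free frames, or None if short."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < n:
            return None  # not enough free memory
        for i in free[:n]:
            self.owner[i] = pid
        return free[:n]

    def release(self, pid):
        """De-allocate every frame the terminated process held."""
        self.owner = [None if o == pid else o for o in self.owner]

mm = MemoryManager(n_frames=8)
print(mm.allocate("p1", 3))  # [0, 1, 2]
print(mm.allocate("p2", 4))  # [3, 4, 5, 6]
mm.release("p1")
print(mm.allocate("p3", 4))  # reuses the freed frames: [0, 1, 2, 7]
```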

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. The operating system performs the following activities for processor management:

  • Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
  • Allocates the processor (CPU) to a process.
  • De-allocates the processor when it is no longer required.
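One common scheduling policy the OS might use for the "which process, when, and for how long" decision is round-robin: each ready process runs for a fixed time quantum, then goes to the back of the queue if it isn't finished. The sketch below is an illustration of that idea, not any particular OS's scheduler; the process names and burst times are made up.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time); returns the order of CPU turns."""
    ready = deque(processes)
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)            # process runs for one quantum
        remaining -= quantum
        if remaining > 0:             # not finished: back of the ready queue
            ready.append((name, remaining))
    return order

turns = round_robin([("A", 3), ("B", 1), ("C", 2)], quantum=1)
print(turns)  # ['A', 'B', 'C', 'A', 'C', 'A']
```

Short processes like B finish in one turn, while longer ones like A keep cycling back: no process can monopolize the CPU for more than one quantum at a time.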

Device Management

The OS manages device communication via the devices' respective drivers. The operating system performs the following activities for device management:

  • Keeps track of all devices. The program responsible for this task is known as the I/O controller.
  • Decides which process gets a device, when, and for how much time.
  • Allocates devices efficiently.
  • De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. The operating system performs the following activities for file management:

  • Keeps track of each file's information, location, usage, status, etc. These collective facilities are often known as the file system.
  • Decides who gets the resources.
  • Allocates the resources.
  • De-allocates the resources.
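The file-system bookkeeping described above can be observed from user space: the OS maintains each file's location, size and status, and standard calls such as `os.stat` and `os.walk` expose that metadata. The directory and file names below are made up for the demonstration.

```python
import os
import tempfile

# Build a small directory tree: a root containing a subdirectory and a file.
root = tempfile.mkdtemp()
sub = os.path.join(root, "docs")
os.mkdir(sub)
path = os.path.join(sub, "note.txt")
with open(path, "w") as f:
    f.write("hello")

# os.stat reads the metadata the OS keeps for the file (size, timestamps, ...).
info = os.stat(path)
print(info.st_size)  # 5 (bytes written above)

# Directories may contain files and other directories, as the text says;
# os.walk traverses that structure using the OS's directory records.
for dirpath, dirnames, filenames in os.walk(root):
    print(dirpath, dirnames, filenames)
```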

Other Important Activities

Following are some of the important activities that Operating System does.

  • Security: prevents unauthorized access to programs and data by means of passwords and similar techniques.
  • Control over system performance: records delays between requests for a service and the system's response.
  • Job accounting: keeps track of time and resources used by various jobs and users.
  • Error-detecting aids: produces dumps, traces, error messages and other debugging and error-detecting aids.
  • Coordination between other software and users: coordinates and assigns compilers, interpreters, assemblers and other software to the various users of the computer system.

The Classification of Operating Systems

  • Multi-user: Allows two or more users to run programs at the same time. Some operating systems permit hundreds or even thousands of concurrent users.
  • Multiprocessing: Supports running a program on more than one CPU.
  • Multitasking: Allows more than one program to run concurrently.
  • Multithreading: Allows different parts of a single program to run concurrently.
  • Real-time: Responds to input instantly. General-purpose operating systems, such as DOS and UNIX, are not real-time.
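The multithreading category above, different parts of a single program running concurrently, can be sketched in a few lines. This is a minimal illustration, not a benchmark; the worker function and counts are arbitrary, and the lock shows why concurrent threads need coordination when they share data.

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    """One thread's share of the program: increment a shared counter n times."""
    global counter
    for _ in range(n):
        with lock:           # serialize access to the shared counter
            counter += 1

# Four parts of the same program run concurrently.
threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: four threads each added 1000
```

Without the lock, two threads could read the same old value of `counter` and lose an update, which is exactly the kind of coordination problem the OS's thread support leaves to the program.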

Operating systems provide a software platform on top of which other programs, called application programs, can run. The application programs must be written to run on top of a particular operating system. Your choice of operating system, therefore, determines to a great extent the applications you can run. For PCs, the most popular operating systems are Mac OS X, and Windows, but others are available, such as Linux.


The GUI-based Windows OS was introduced in 1985 and has been released in many versions since then, as described below.

Microsoft got its start with the partnership of Bill Gates and Paul Allen in 1975. Gates and Allen co-developed Xenix (a version of Unix) and also collaborated on a BASIC interpreter for the Altair 8800. The company was incorporated in 1981.

2001: Windows XP

Windows XP was released as the first NT-based system with a version aimed squarely at the home user. XP was rated highly by both users and critics. The system improved Windows' appearance with themes and offered a stable platform. XP also marked the end of gaming in DOS, for all intents and purposes: DirectX enabled features in 3D gaming that OpenGL had trouble keeping up with at times. Future versions of Windows would be compared to XP for gaming performance for some time. XP offered the first Windows support for 64-bit computing, although 64-bit mode was not well supported in XP and lacked drivers and software.


As it turned out, Windows XP was one of the most popular versions. In combination with the unpopularity of the upcoming Vista system, that would eventually lead to update-related problems.

2006: Windows Vista


Windows Vista was a highly hyped release that spent a lot of development effort and computing resources on appearance. The dedication of resources might have resulted from the fact that XP was starting to look archaic in comparison to Mac OS. Vista had interesting visual effects but was slow to start and run. The 32-bit version in particular didn't support enough RAM for the memory-hungry OS to operate quickly. Users still too timid to embrace 64-bit missed out on a marginally better experience, which also required investing in more than 4 GB of RAM. Gamers found the exclusive features added in DirectX 10 only mildly tempting compared to XP's speed. Licensing rights and Windows activation became stricter, while user control of internal workings became less accessible. Microsoft lost market share in this period to Apple and Linux variants alike. Vista's flaws, coupled with the fact that many older computers lacked the resources to run the system, led many home and business users to stay with XP rather than update. That situation became problematic when Microsoft announced that XP's end of life would occur in April 2014.


2009: Windows 7

Windows 7 was built on the Vista kernel. It had the visuals of Vista with better startup and program speed, was easier on memory and was more reliable. To many end users, the biggest changes between Vista and Windows 7 were faster boot times, new user interfaces and the addition of Internet Explorer 8.


The system plays games almost as well as XP. With true 64-bit support and a growing set of DirectX features that XP lacked, even that small performance advantage was eroded. Windows 7 became the most used operating system on the Internet and also the most used for PC gaming.

2012: Windows 8


Windows 8 was released with a number of enhancements and the new Metro UI. Windows 8 takes better advantage of multi-core processing, solid-state drives (SSDs), touch screens and other alternate input methods. However, users found it awkward to switch between an interface made for a touch screen and one made for a mouse, with neither entirely suited to the purpose. Generally, Windows 7 retained market leadership. Even after Microsoft's UI and other updates in 8.1, Windows 8 trailed not just 7 but XP in user numbers into 2014.

2015: Windows 10


Microsoft announced Windows 10 in September 2014, skipping Windows 9. Version 10 includes the start menu, which was absent from Windows 8. A responsive design feature called Continuum adapts the interface depending on whether the touch screen or keyboard and mouse are being used for input. New features like an on-screen back button simplify touch input. The OS is designed to have a consistent interface across user devices including PCs, laptops, phones and tablets.


Windows 10 beefs up Snap, the function that lets you quickly arrange apps side by side, with a new quadrant layout that lets you split your display up among up to four apps. There’s also support for multiple virtual desktops (finally), so you can keep all your work apps in one place and quickly slide back to the desktop with your blogs and Reddit once your boss walks away. And then there’s the task view button that lives on the taskbar. Click it, and you’ll get a quick look at all of your open files, windows, and desktops.


As if bringing the Start Menu back weren’t enough, Microsoft has built its personal voice assistant Cortana right in. Even if you’re already using Google Now or Siri, having Cortana on your desktop can be handy. You can perform web searches to get many of the same quick answers by simply pressing the Win key and typing a question like “How many ounces are in a cup” or “What’s the weather like?”


Being able to run a few apps at once is the great benefit of an operating system like Windows. Running too many, though, can get overwhelming. Now, Microsoft is finally adding the ability to create and manage multiple desktops. You can add new desktops, quickly move windows between them, and jump between desktops by pressing Win-Tab. This may not be all that useful for average users, but those of us who do a lot of work with our machines will appreciate the feature.

Wi-Fi Sense

I’d be remiss if I didn’t mention Wi-Fi Sense. While technically not a new feature (it’s part of Windows Phone 8.1), its presence in Windows 10 should be a welcome addition: Wi-Fi Sense connects your devices to trusted Wi-Fi hotspots.

I love the idea. Automatically sharing Wi-Fi credentials with my friends would remove much of the hassle of most social gatherings, when people just want to jump on my Wi-Fi network. And — this part is key — Wi-Fi Sense doesn’t share the actual password, so it theoretically eases a social transaction (the sharing of Wi-Fi connectivity) without necessarily compromising my network security.



Antivirus Compare (Windows and Android)


Comparing Windows Antivirus Software

| Antivirus   | Product                 | Protection | Performance | Usability | Ease of Scanning | Resource Use | First Quick Scan (Min) | Average Full Scan |
|-------------|-------------------------|------------|-------------|-----------|------------------|--------------|------------------------|-------------------|
| Bitdefender | Internet Security       | 98         | 100         | 100       | 100              | 91           | 1.75                   | 60                |
| Kaspersky   | Total Security          | 96         | 97          | 100       | 75               | 95           | 2                      | 61                |
| BullGuard   | Antivirus               | 96         | 90          | 90        | 92               | 98           | 1                      | 56                |
| McAfee      | LiveSafe                | 95         | 90          | 100       | 67               | 100          | 6                      | 56                |
| F-Secure    | Anti-Virus              | 96         | 92          | 92        | 83               | 85           | 1                      | 23                |
| Avira       | Internet Security Suite | 98         | 80          | 100       | 67               | 90           | 0.5                    | 63                |
| Trend Micro | Internet Security       | 96         | 80          | 95        | 92               | 85           | 1                      | 45                |
| Avast       | Free Antivirus          | 96         | 85          | 100       | 58               | 85           | 18                     | 58                |
| AhnLab      | Security                | 90         | 85          | 90        | 0                | 0            | 0                      | 0                 |
| AVG         | Free Antivirus          | 96         | 75          | 100       | 75               | 89           | 0                      | 30                |
| Panda       | Free Antivirus          | 94         | 75          | 95        | 75               | 93           | 8                      | 57                |
| Symantec    | Norton Security         | 95         | 80          | 95        | 58               | 88           | 3.75                   | 41                |
| Qihoo 360   | Total Security          | 94         | 65          | 100       | 0                | 0            | 0                      | 0                 |
| eScan       | Anti-Virus              | 87         | 75          | 95        | 67               | 85           | 1.5                    | 35                |
| ESET        | NOD32 Antivirus         | 87         | 58          | 100       | 67               | 99           | 0                      | 53                |
| G Data      | Internet Security       | 92         | 50          | 100       | 83               | 95           | 5                      | 43                |
| Emsisoft    | Anti-Malware            | 89         | 60          | 90        | 0                | 0            | 0                      | 0                 |
| Windows     | Windows Defender        | 65         | 85          | 90        | 0                | 0            | 0                      | 0                 |


The term “antivirus software” stems from the early days of computer viruses, in which programs were created to remove viruses and prevent them from spreading. However, over the years, different types of malicious software, often called malware, emerged as threats to personal and work computers worldwide. “Malware” is an umbrella term to describe several different kinds of malicious programs, including computer viruses.

Although antivirus software evolved to combat new malware, the term "antivirus" stuck, even though the term "anti-malware" is truer to the software's capabilities. To give you an idea of the different types of malware out there, we've identified malware types that are potential threats to computer systems today.




Worms

These malicious programs are designed to replicate themselves quickly with the intent to spread to other computers, often through a computer network. Although they may not be designed to intentionally impair computer systems, worms generally do some sort of damage or harm to the network itself by consuming bandwidth, at the very least. Most worms are designed only to spread as quickly as possible, so they may not try to change the computer systems they pass through. However, worms have been and are capable of creating backdoor security vulnerabilities, deleting files or even sending files via email. This is a common method for spam senders to spread junk email quickly, as the more computers the worms infect, the faster the spam mail spreads.

Trojan Horses

Trojan horses, or Trojans for short, are different from worms in that they are not designed to replicate themselves. Rather, Trojans are designed to trick you into downloading and executing them to cause data loss, theft and sometimes total-system harm. Just as in the ancient Greek story of the wooden horse designed to deceive the soldiers of Troy, Trojans present themselves as useful, interesting or routine programs to trick you into installing them on your computer.


Spyware

This software is designed to gather information about you without your knowledge. This information can be sent to another party without your consent, and in some rather malicious cases, it can even be used to take control over a computer. Spyware is capable of collecting any type of data, including your internet history and banking information. Some forms of spyware can install additional software or change your internet or browser settings, which can be a mere annoyance or a problem that can take days to fix.

Ransomware

This incarnation of malware infects your computer with the intention of restricting access to your computer system, perhaps preventing you from surfing the internet or accessing the hard drive, and then demands a payment to the malware's creators. The trouble with this software is that it tries to imitate the look of genuine, trusted software to trick you into buying a solution. For example, some forms of ransomware tell you that your user license for a particular application has expired and that you need to repurchase the license. Some of the trickiest ransomware creators have acquired millions of dollars from unsuspecting users.


Rootkits

Rootkits are stealthy types of malware that attempt to hide from typical methods of detection and allow continued privileged access to a computer. This essentially means that the rootkit attempts to gain administrator access on your computer and then hides itself so you don't know it is on your system. This type of malware is generally difficult to detect and remove because it tries to embed itself thoroughly and deeply into your computer's system.

Malware is not limited to these five examples, but this gives you a sense of how malicious and vicious malware can be. Fortunately, antivirus software is designed to combat these threats by preventing the programs from entering your system and quarantining and removing any malware that does get through. The best way to protect yourself from malware is to update your computer system when prompted and to purchase third-party antivirus software that protects your computer 24/7.



Is Antivirus Software Necessary?

You may ask yourself why you need antivirus software when your computer comes with, or makes readily available, free antivirus software found in Windows Defender and Microsoft Security Essentials. Windows Defender has been available since the Windows Vista days as an antispyware program, designed to monitor and detect programs that try to gather information about you without your knowledge. With the release of Windows 8, Windows Defender was upgraded to offer additional antivirus protection features. Windows 10 comes with Windows Defender built in to the operating system itself. Microsoft Security Essentials offers antivirus protection against viruses, spyware, Trojans and rootkits, and it is available on Windows XP, Vista and 7 but not on Windows 8.

With these protections in place, why would you ever need to download a third-party antivirus program? The answer is performance.

Although Windows Defender and Microsoft Security Essentials offer built-in support and Microsoft continues to improve upon these systems, they don’t generate high scores in the tests conducted by AV-Test, the respected independent antivirus software test lab. This is expected to a point, as these programs were designed as a baseline of protection for users who don’t plan on purchasing commercial antivirus protection. However, AV-Test regularly publishes its test results comparing both Windows Defender and Microsoft Security Essentials to the top third-party antivirus programs. The third-party systems score higher every single time.

So, is antivirus software necessary when Windows already has built-in protections against viruses? Although baseline virus protections can give you some sense of security, you want the top-performing antivirus programs to make sure you are always protected. AV-Test tests two separate categories of malware or virus interception: the detection of widespread and prevalent viruses and the detection of zero-day, or brand-new, malware attacks. In both categories of tests, Windows Defender and Microsoft Security Essentials performed poorer than the industry standard, regardless of which Windows operating system was used during the tests. This means that third-party antivirus programs are more capable of protecting your computer, and you, against malicious virus attacks.

Free vs. Paid Antivirus Software

As you search for the best antivirus software, you’re going to run into free software that claims it is as capable as paid programs. If you were to rank antivirus software categories into three different tiers, you’d find free software in the bottom tier, with the least functionality and protection. Although free software can be enticing, free antivirus protection is not as capable as paid software. All free programs can scan for viruses, but only some of them scan for malware automatically and offer real-time protection or browser add-ons to help you avoid bad links. Most advanced features are limited to paid antivirus programs.

One annoyance of free antivirus software is that each program displays ads for the full, paid version of the product. This doesn’t detract from the free version’s performance or capability, but it can be distracting and annoying. Some programs even immediately launch your web browser and link to their company’s website if you click on a feature that isn’t available in the free version, which may be minor but annoying nonetheless.

Perhaps the biggest frustration with even the best free antivirus programs is the general lack of support offered by the developers. Paid programs generally offer extensive technical support, allowing you to contact the manufacturer via email, phone and live chat. Free programs generally leave you fending for yourself with user manuals or a knowledge base in which you have to comb through information before you find material specific to your problem.

Choosing the Best Antivirus for Your PC

Perhaps the most confusing part of shopping for antivirus software is finding the best program for your needs. Besides free programs, there are generally three recognized tiers of virus protection: antivirus software, internet security suites and premium security suites. Antivirus software is the lowest tier and is regarded as entry-level viral protection. Internet security suites, the next tier of protection, offer more functionality with further protections, such as firewalls and antispam tools. The top tier of virus protection is premium security suites, which are comprehensive tools to help you protect your system from the most aggressive malware with a variety of measures and protections.
