1. SQL
SQL, often referred to as Structured Query Language, is a programming language designed for managing data in relational database management systems (RDBMS).
Originally based upon relational algebra and tuple relational calculus, its scope includes data insert, query, update and delete, schema creation and modification, and data access control.
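As a minimal illustration of that scope, the sketch below uses Python's built-in sqlite3 module against an in-memory database with an invented users table; it is a sketch, not any particular vendor's dialect, and exact syntax varies between RDBMS products:

    import sqlite3

    # In-memory database for illustration; a real application would
    # connect to an RDBMS server instead.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Schema creation
    cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    # Data insert
    cur.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))

    # Query
    cur.execute("SELECT id, name FROM users")
    print(cur.fetchall())  # [(1, 'Ada')]

    # Update and delete
    cur.execute("UPDATE users SET name = ? WHERE id = ?", ("Grace", 1))
    cur.execute("DELETE FROM users WHERE id = ?", (1,))
    conn.commit()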
SQL was one of the first commercial languages for Edgar F. Codd's relational model, as described in his influential 1970 paper, "A Relational Model of Data for Large Shared Data Banks". Despite not entirely adhering to the relational model as described by Codd, it became the most widely used database language. Although SQL is often described as, and to a great extent is, a declarative language, it also includes procedural elements. SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. Since then the standard has been enhanced several times with added features. However, issues of SQL code portability between major RDBMS products still exist due to incomplete compliance with, or differing interpretations of, the standard. Among the reasons cited are the large size and incomplete specification of the standard, as well as vendor lock-in.
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce in the early 1970s. This version, initially called SEQUEL (Structured English Query Language), was designed to manipulate and retrieve data stored in IBM's original quasi-relational database management system, System R, which a group at IBM San Jose Research Laboratory had developed during the 1970s. The acronym SEQUEL was later changed to SQL because "SEQUEL" was a trademark of the UK-based Hawker Siddeley aircraft company.
The first Relational Database Management System (RDBMS) was RDMS, developed at MIT in the early 1970s, soon followed by Ingres, developed in 1974 at U.C. Berkeley. Ingres implemented a query language known as QUEL, which was later supplanted in the marketplace by SQL.
In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the potential of the concepts described by Codd, Chamberlin, and Boyce and developed their own SQL-based RDBMS with aspirations of selling it to the U.S. Navy, Central Intelligence Agency, and other U.S. government agencies. In June 1979, Relational Software, Inc. introduced the first commercially available implementation of SQL, Oracle V2 (Version 2) for VAX computers. Oracle V2 beat IBM's August release of the System/38 RDBMS to market by a few weeks.
After testing SQL at customer test sites to determine the usefulness and practicality of the system, IBM began developing commercial products based on their System R prototype including System/38, SQL/DS, and DB2, which were commercially available in 1979, 1981, and 1983, respectively.
2. Photosensitive Seizures
Photosensitive epilepsy (PSE) is a form of epilepsy in which seizures are triggered by visual stimuli that form patterns in time or space, such as flashing lights, bold, regular patterns, or regular moving patterns.
Persons with PSE experience epileptiform seizures upon exposure to certain visual stimuli. The exact nature of the stimulus or stimuli that triggers the seizures varies from one patient to another, as does the nature and severity of the resulting seizures (ranging from brief absence seizures to full tonic–clonic seizures). Many PSE patients experience an “aura” or feel odd sensations before the seizure occurs, and this can serve as a warning to a patient to move away from the trigger stimulus.
The visual trigger for a seizure is generally cyclic, forming a regular pattern in time or space. Flashing lights or rapidly changing or alternating images (as in clubs, around emergency vehicles, in action movies or television programs, etc.) are examples of patterns in time that can trigger seizures, and these are the most common triggers. Static spatial patterns such as stripes and squares may trigger seizures as well, even if they do not move. In some cases, the trigger must be both spatially and temporally cyclic, such as a certain moving pattern of bars.
Several characteristics are common in the trigger stimuli of many PSE patients. The patterns are usually high in luminance contrast (bright flashes of light alternating with darkness, or white bars against a black background). Contrasts in color alone (without changes in luminance) are rarely triggers for PSE. Some patients are more affected by patterns of certain colors than by patterns of other colors. The exact spacing of a pattern in time or space is important and varies from one individual to another: a patient may readily experience seizures when exposed to lights that flash seven times per second, but may be unaffected by lights that flash twice per second or twenty times per second. Stimuli that fill the entire visual field are more likely to cause seizures than those that appear in only a portion of the visual field. Stimuli perceived with both eyes are usually much more likely to cause seizures than stimuli seen with one eye only (which is why covering one eye may allow patients to avoid seizures when presented with visual challenges). Some patients are more sensitive with their eyes closed; others are more sensitive with their eyes open.
Sensitivity is increased by alcohol consumption, sleep deprivation, illness, and other forms of stress.
The first case of epileptiform seizures related to a video game was reported in 1981. Since then, "many cases of seizures triggered by VGs were reported, not only in photosensitive, but also in nonphotosensitive children and adolescents with epilepsy... Specific preventive measures concerning the physical characteristics of images included in commercially available VGs (flash rate, choice of colors, patterns, and contrast) can lead in the future to a clear decrease of this problem." Risks can also be reduced through measures such as keeping a safe distance (at least 2 meters) from the screen.
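One hypothetical sketch of such a preventive measure, in Python, is a guard that flags flash rates falling inside a risky band. The band below merely mirrors the example patient described above (sensitive at seven flashes per second, unaffected at two or twenty) and is illustrative only, not clinical guidance:

    # Hypothetical flash-rate guard. The band is invented for illustration
    # and, as noted above, the sensitive range varies between individuals.
    RISKY_BAND_HZ = (5.0, 15.0)

    def flash_rate_is_risky(flashes: int, seconds: float) -> bool:
        """Return True if the flash frequency falls inside the assumed band."""
        rate = flashes / seconds
        return RISKY_BAND_HZ[0] <= rate <= RISKY_BAND_HZ[1]

    print(flash_rate_is_risky(7, 1.0))   # True: 7 Hz is inside the band
    print(flash_rate_is_risky(2, 1.0))   # False: 2 Hz is below it
    print(flash_rate_is_risky(20, 1.0))  # False: 20 Hz is above it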
While computer displays in general present very little risk of producing seizures in PSE patients (much less risk than that presented by television sets), video games with rapidly changing images or highly regular patterns can produce seizures, and video games have increased in importance as triggers as they have become more common. Some people with no prior history of PSE may first experience a seizure while playing a video game. Often the sensitivity is very specific; for example, it may be a specific scene in a specific game that causes seizures, and no other scenes. Despite this, there are ongoing questions about the danger, and calls for all video games to be tested for their potential to trigger PSE. Laws requiring that PSE warnings be displayed on packaging and/or in stores have been proposed, and legal firms are monitoring developments.
3. High-Level Programming Language
A high-level programming language is a programming language with strong abstraction from the details of the computer. In comparison to low-level programming languages, it may use natural language elements, be easier to use, or automate (or even hide entirely) significant areas of computing systems, such as memory management, making the process of developing a program simpler and more understandable than with a low-level language. The amount of abstraction provided defines how "high-level" a programming language is.
The first high-level programming language to be designed for a computer was Plankalkül, created by Konrad Zuse. However, it was not implemented in his time, and his original contributions were isolated from other developments.
"High-level language" refers to the higher level of abstraction from machine language. Rather than dealing with registers, memory addresses and call stacks, high-level languages deal with variables, arrays, objects, complex arithmetic or boolean expressions, subroutines and functions, loops, threads, locks, and other abstract computer science concepts, with a focus on usability over optimal program efficiency. Unlike low-level assembly languages, high-level languages have few, if any, language elements that translate directly into a machine's native opcodes. Other features, such as string handling routines, object-oriented language features, and file
input/output, may also be present.
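To make the contrast concrete, here is a small Python sketch: the first version leans on high-level constructs (a list plus a built-in aggregate), while the second spells out the loop bookkeeping that a low-level routine would have to manage explicitly:

    # High-level: a list and a built-in hide index arithmetic entirely.
    scores = [88, 92, 75, 99]
    average = sum(scores) / len(scores)
    print(average)  # 88.5

    # The same computation with the bookkeeping written out, closer in
    # spirit to what an assembly routine does step by step.
    total = 0
    i = 0
    while i < len(scores):
        total += scores[i]
        i += 1
    print(total / len(scores))  # 88.5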
While high-level languages are intended to make complex programming simpler, low-level languages often produce more efficient code. Abstraction penalty is the barrier that prevents high-level programming techniques from being applied in situations where computational resources are limited. High-level programming features like more generic data structures, run-time interpretation, and intermediate code files often result in slower execution speed, higher memory consumption, and larger binary program size. For this reason, code which needs to run particularly quickly and efficiently may require the use of a lower-level language, even if a higher-level language would make the coding easier. In many cases, critical portions of a program otherwise written in a high-level language can be hand-coded in assembly language, leading to a much faster or more efficient optimised program.
However, with the growing complexity of modern microprocessor architectures, well-designed compilers for high-level languages frequently produce code comparable in efficiency to what most low-level programmers can produce by hand, and the higher abstraction may allow for more powerful techniques providing better overall results than their low-level counterparts in particular settings.
4. Role-Playing Game (RPG)
A role-playing game (RPG) is a game in which players assume the roles of characters in a fictional setting. Players take responsibility for acting out these roles within a narrative, either through literal acting, or through a process of structured decision-making or character development. Actions taken within many games succeed or fail according to a formal system of rules and guidelines.
There are several forms of RPG. The original form, sometimes called the tabletop RPG, is conducted through discussion, whereas in live action role-playing games (LARP) players physically perform their characters' actions. In both of these forms, an arranger called a game master (GM) usually decides on the rules and setting to be used and acts as referee, while each of the other players plays the role of a single character.
Several varieties of RPG also exist in electronic media, such as multi-player text-based MUDs and their graphics-based successors, massively multiplayer online role-playing games (MMORPGs). Role-playing games also include single-player offline role-playing video games in which players control a character or team who undertake quests, and whose capabilities may advance through statistics-based mechanics. These games often share settings and rules with tabletop RPGs, but emphasize character advancement more than collaborative storytelling.
Despite this variety of forms, some related game forms, such as trading card games and wargames, are generally not classified as role-playing games. Role-playing activity may sometimes be present in such games, but it is not the primary focus. The term is also sometimes used to describe roleplay simulation games and exercises used in teaching, training, and academic research.
Single player role-playing video games form a loosely defined genre of computer and console games with origins in role-playing games such as Dungeons & Dragons, on which they base much of their terminology, settings and game mechanics. This translation changes the experience of the game, providing a visual representation of the world but emphasizing statistical character development over collaborative, interactive storytelling.
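A minimal Python sketch of what that statistical emphasis looks like in practice; the class, attribute names, and thresholds here are invented for illustration rather than taken from any particular game:

    class Character:
        """A hypothetical RPG character whose capabilities advance with XP."""

        def __init__(self, name: str):
            self.name = name
            self.level = 1
            self.xp = 0
            self.strength = 10

        def gain_xp(self, amount: int) -> None:
            self.xp += amount
            # Arbitrary rule: each level requires level * 100 XP and
            # grants a small stat increase.
            while self.xp >= self.level * 100:
                self.xp -= self.level * 100
                self.level += 1
                self.strength += 2

    hero = Character("Aria")
    hero.gain_xp(250)
    print(hero.level, hero.strength)  # 2 12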
Online text-based role-playing games involve many players using some type of text-based interface and an Internet connection to play an RPG. Games played in a real-time way include MUDs, MUSHes, and other varieties of MU*. Games played in a turn-based fashion include play-by-mail games and play-by-post games.
Massively multi-player online role-playing games (MMORPGs) combine the large-scale social interaction and persistent world of MUDs with graphic interfaces. Most MMORPGs do not actively promote in-character role-playing; however, players can use the games' communication functions to role-play so long as other players cooperate. The majority of players in MMORPGs do not engage in role-play in this sense.
5. Artificial Intelligence (AI)
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
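Read as a loop, that definition of an intelligent agent is easy to sketch. The toy Python example below is a hypothetical illustration (the environment, target, and policy are all invented): the agent repeatedly perceives a state and chooses the action that moves it toward its goal:

    import random

    TARGET = 10  # invented goal for this toy environment

    def perceive(state: int) -> int:
        return state  # the toy environment is fully observable

    def decide(percept: int) -> int:
        # Pick the action most likely to close the gap to the goal.
        return 1 if percept < TARGET else -1

    state = random.randint(0, 20)
    for _ in range(30):
        action = decide(perceive(state))
        state += action  # acting changes the environment
    print(state)  # ends at or oscillating right around TARGET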
The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, deeply divided into subfields that often fail in the task of communicating with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long term goals.
Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshipped in Egypt and Greece, and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari. It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.
Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.
The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: Computers were solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
They had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off all undirected exploratory research in AI. The next few years, when funding for projects was hard to find, would later be called the "AI winter".
In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
The leading-edge definition of artificial intelligence research is changing over time. One pragmatic definition is: "AI research is that which computing scientists do not know how to do cost-effectively today." For example, in 1956 optical character recognition (OCR) was considered AI, but today sophisticated OCR software with a context-sensitive spell checker and grammar checker comes free with most image scanners. No one today would consider an already-solved computing-science problem like OCR to be "artificial intelligence".
Low-cost entertaining chess-playing software is commonly available for tablet computers. DARPA no longer provides significant funding for chess-playing computing system development. The Kinect, which provides a 3D body–motion interface for the Xbox 360, uses algorithms that emerged from lengthy AI research, but few consumers realize the technology's source.
AI applications are no longer the exclusive domain of Department of Defense R&D, but are now commonplace consumer items and inexpensive intelligent toys.
In common usage, the term "AI" no longer seems to apply to off-the-shelf solved computing-science problems, which may have originally emerged out of years of AI research.
6. Virtual Reality (VR)
Virtual reality (VR), also known as virtuality, is a term that applies to computer-simulated environments that can simulate physical presence in places in the real world, as well as in imaginary worlds. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced, haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications.
Furthermore, virtual reality covers remote communication environments which provide virtual presence of users with the concepts of telepresence and telexistence or a virtual artifact (VA), either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus, and omnidirectional treadmills.
The simulated environment can be similar to the real world in order to create a lifelike experience (for example, in simulations for pilot or combat training) or it can differ significantly from reality, such as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution, and communication bandwidth; however, the technology's proponents hope that such limitations will be overcome as processor, imaging, and data communication technologies become more powerful and cost-effective over time.
Virtual reality is often used to describe a wide variety of applications commonly associated with immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, datagloves, and miniaturization have helped popularize the notion. In the book The Metaphysics of Virtual Reality, Michael R. Heim identifies seven different concepts of virtual reality: simulation, interaction, artificiality, immersion, telepresence, full-body immersion, and network communication. People often identify VR with head-mounted displays and data suits.
Virtual reality can trace its roots to the 1860s, when 360-degree art in the form of panoramic murals began to appear. An example is Baldassare Peruzzi's piece titled Sala delle Prospettive. In the 1920s, vehicle simulators were introduced. Morton Heilig wrote in the 1950s of an "Experience Theatre" that could encompass all the senses in an effective manner, thus drawing the viewer into the onscreen activity. He built a prototype of his vision, dubbed the Sensorama, in 1962, along with five short films to be displayed in it while engaging multiple senses (sight, sound, smell, and touch). Predating digital computing, the Sensorama was a mechanical device, which reportedly still functions today. Around this time, Douglas Engelbart used computer screens as both input and output devices. In 1966, Thomas A. Furness III introduced a visual flight simulator for the Air Force. In 1968, Ivan Sutherland, with the help of his student Bob Sproull, created what is widely considered to be the first virtual reality and augmented reality (AR) head-mounted display (HMD) system. It was primitive both in terms of user interface and realism, and the HMD to be worn by the user was so heavy it had to be suspended from the ceiling. The graphics comprising the virtual environment were simple wireframe model rooms. The formidable appearance of the device inspired its name, The Sword of Damocles. Also notable among the earlier hypermedia and virtual reality systems was the Aspen Movie Map, which was created at MIT in 1977. The program was a crude virtual simulation of Aspen, Colorado, in which users could wander the streets in one of three modes: summer, winter, and polygons. The first two were based on photographs (the researchers actually photographed every possible movement through the city's street grid in both seasons) and the third was a basic 3-D model of the city.
In the late 1980s, the term "virtual reality" was popularized by Jaron Lanier, one of the modern pioneers of the field. Lanier had founded the company VPL Research in 1985, which developed and built some of the seminal "goggles and gloves" systems of that decade. In 1991, Antonio Medina, an MIT graduate and NASA scientist, designed a virtual reality system to "drive" Mars rovers from Earth in apparent real time despite the substantial delay of Mars-Earth-Mars signals. The system, termed "Computer-Simulated Teleoperation" as published by Rand, is an extension of virtual reality.
7. Software Engineering
Software Engineering (SE) is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. It qualifies as engineering because it integrates significant mathematics, computer science, and practices whose origins lie in engineering. It is also defined as a systematic approach to the analysis, design, assessment, implementation, testing, maintenance and reengineering of software. The term software engineering first appeared in the 1968 NATO Software Engineering Conference, and was meant to provoke thought regarding the perceived "software crisis" at the time.
Software development, a much used and more generic term, does not necessarily subsume the engineering paradigm. Although it is questionable what impact the discipline has had on actual software development over the past 40-plus years, the field's future looks bright according to Money Magazine and Salary.com, which rated "software engineer" as the best job in the United States in 2006.
When the first modern digital computers appeared in the early 1940s, the instructions to make them operate were wired into the machine. Practitioners quickly realized that this design was not flexible and came up with the "stored program architecture" or von Neumann architecture. Thus the division between "hardware" and "software" began with abstraction being used to deal with the complexity of computing.
Programming languages started to appear in the 1950s, and this was another major step in abstraction. Major languages such as Fortran, ALGOL, and COBOL were released in the late 1950s to deal with scientific, algorithmic, and business problems respectively. E.W. Dijkstra wrote his seminal paper, "Go To Statement Considered Harmful", in 1968, and David Parnas introduced the key concepts of modularity and information hiding in 1972 to help programmers deal with the ever-increasing complexity of software systems. A software system for managing the hardware, called an operating system, was also introduced, most notably Unix in 1969. In 1967, the Simula language introduced the object-oriented programming paradigm.
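Parnas-style information hiding is straightforward to illustrate in a modern language. The Python sketch below (names invented for illustration) hides a module's storage decision behind a small public interface, so callers are insulated from representation changes:

    class Counter:
        """Hides its internal representation behind two public methods."""

        def __init__(self):
            self._count = 0  # leading underscore: internal detail by convention

        def increment(self) -> None:
            self._count += 1

        def value(self) -> int:
            return self._count

    # Callers depend only on increment() and value(); the representation
    # could change (say, to a persistent store) without touching them.
    c = Counter()
    c.increment()
    print(c.value())  # 1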
These advances in software were met with more advances in computer hardware. In the mid 1970s, the microcomputer was introduced, making it economical for hobbyists to obtain a computer and write software for it. This in turn led to the now famous Personal Computer (PC) and Microsoft Windows. The Software Development Life Cycle or SDLC was also starting to appear as a consensus for centralized construction of software in the mid 1980s. The late 1970s and early 1980s saw the introduction of several new Simula-inspired object-oriented programming languages, including Smalltalk, Objective-C, and C++.
Open-source software started to appear in the early 90s in the form of Linux and other software introducing the "bazaar" or decentralized style of constructing software. Then the World Wide Web and the popularization of the Internet hit in the mid 90s, changing the engineering of software once again. Distributed systems gained sway as a way to design systems, and the Java programming language was introduced with its own virtual machine as another step in abstraction. Programmers collaborated and wrote the Agile Manifesto, which favored more lightweight processes to create cheaper and more timely software.
The current definition of software engineering is still being debated by practitioners today as they struggle to come up with ways to produce software that is "cheaper, better, faster". Cost reduction has been a primary focus of the IT industry since the 1990s. Total cost of ownership represents the costs of more than just acquisition. It includes things like productivity impediments, upkeep efforts, and resources needed to support infrastructure.