First blog post




Today is the last date for our blog posting. I enjoyed this course a lot. Clare, our tutor, actually helped us a great deal to understand research. Through the exercises we did in class, we learned everything that goes into a research project. And not only research: Clare would also discuss other topics and her own experiences in class, which was genuinely helpful for learning more about the computing field. It was a really exciting class for me because every day I learned something new.

I am still working on Assignment 3, and I am still thinking about my research proposal. LOL

I would like to thank Dr. Clare Atkins for being my class coordinator and supporting us. Also, my hearty thanks to Belma and Liz, who took classes for us; that helped us a lot too.



In our last research classes we discussed the third assignment. Clare was not in our class for a few weeks, so Belma taught us more about Assignment 3 and about ethical considerations. Actually, I did not know what they were. In class she asked us a few questions based on ethical considerations; we had to think about them and then discuss them in groups. We all had different opinions, and through those arguments we learned what ethical considerations actually are. She also gave us a clear picture of them.

  • Ethical considerations

Ethical Approval Process

  1. Protection
  2. Confidence
  3. Confirmation


  1. Assess/Minimize Risk
  2. Voluntary Participation
  3. Informed Consent
  4. Confidentiality



Today we discussed how to create user views, how we can grant privileges to new users, and how a user can make changes to tables. The changes a user can make are as follows:

  • add tables
  • delete tables
  • update tables

How the user can perform all these operations, and to what extent we should provide them, was discussed in class. After that we discussed Milestone 3 one more time; an example report is given on the Moodle site, and the tutor and we discussed our doubts based on that example report. A user can also create clustered indexed views and can modify data through a view. Moreover, there is a keyword called GRANT which you can use to grant user permissions, among other things. A user and a login are not the same; they are always different things. The tutor told us to find out more about this, even though we don't need to include it in our project. It all deals with the security of the database, and it is a vast area to study in a practical way.
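As a minimal sketch of the ideas above, here is roughly what those statements look like in SQL Server. All the names (SalesLogin, SalesUser, ProductView, dbo.Product) are my own invented examples, not the milestone schema:

```sql
-- A login is server-level; a user is database-level -- they are different things.
CREATE LOGIN SalesLogin WITH PASSWORD = 'Str0ng!Passw0rd';
CREATE USER SalesUser FOR LOGIN SalesLogin;
GO

-- A view restricts which columns and rows the user can see.
-- WITH SCHEMABINDING is required before a view can be given a clustered index.
CREATE VIEW dbo.ProductView
WITH SCHEMABINDING
AS
SELECT ProductID, ProductName
FROM dbo.Product;
GO

-- GRANT hands out only the permissions we decide to provide.
GRANT SELECT ON dbo.ProductView TO SalesUser;
GRANT INSERT, UPDATE, DELETE ON dbo.Product TO SalesUser;
```

With SCHEMABINDING plus a unique clustered index added on the view, SQL Server allows the "clustered indexed view" mentioned above.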

At the end of this class we discussed positive and negative feedback on the overall course, and all the students shared their experiences and opinions with Todd.


Even though so many people are researching artificial intelligence, there are many controversies about it. Some say there are dangers as well. Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: “machines will be capable, within twenty years, of doing any work a man can do”; obviously this prediction failed to come true. Microsoft co-founder Paul Allen believes that such intelligence is unlikely this century because it would require “unforeseeable and fundamentally unpredictable breakthroughs” and a “scientifically deep understanding of cognition”. Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight. Optimism that AGI is feasible waxes and wanes, and may have seen a resurgence in the 2010s: around 2015, computer scientist Richard Sutton averaged together some recent polls of artificial intelligence experts and estimated a 25% chance that AGI will arrive before 2030, but a 10% chance that it will never arrive at all.

Risk of human extinction

The creation of artificial general intelligence may have repercussions so great and so complex that it may not be possible to forecast what will come afterwards. The hypothetical future event of achieving strong AI is therefore called the technological singularity, because theoretically one cannot see past it. But this has not stopped philosophers and researchers from guessing what the smart computers or robots of the future may do, including forming a utopia by being our friends or overwhelming us in an AI takeover. The latter possibility is particularly disturbing, as it poses an existential risk for mankind.

Self-replicating machines

Smart computers or robots would be able to produce copies of themselves: they would be self-replicating machines. A growing population of intelligent robots could conceivably outcompete inferior humans in job markets, in business, in science, in politics (pursuing robot rights), technologically, sociologically (by acting as one), and militarily. See also swarm intelligence.

Emergent superintelligence

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called “recursive self-improvement”. It would then be even better at improving itself, and would probably continue doing so in a rapidly accelerating cycle, leading to an intelligence explosion and the emergence of superintelligence. Such an intelligence would not have the limitations of human intellect, and might be able to invent or discover almost anything.

Hyper-intelligent software might not necessarily decide to support the continued existence of mankind, and might be extremely difficult to stop. This topic has also recently begun to be discussed in academic publications as a real source of risk to civilization and the planet.

One proposal to deal with this is to make sure that the first generally intelligent AI is a friendly AI, which would then endeavor to ensure that subsequently developed AIs were also nice to us. However, friendly AI is harder to create than plain AGI, so it is likely, in a race between the two, that non-friendly AI would be developed first. Also, there is no guarantee that a friendly AI would remain friendly, or that its progeny would all be good.


Actually, studying this research subject made me interested in doing research on artificial intelligence, and I am finding out more about it. My findings are written below.

Many different definitions of intelligence have been proposed (such as being able to pass the Turing test), but to date there is no definition that satisfies everyone. However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty;
  • represent knowledge, including commonsense knowledge;
  • plan;
  • learn;
  • communicate in natural language;
  • and integrate all these skills towards common goals.

Other important capabilities include the ability to sense (e.g. see) and the ability to act (in the world where intelligent behaviour is to be observed). This would include an ability to detect and respond to hazards. Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in) and autonomy. Computer-based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.

Tests for confirming operational AGI

Scientists have varying ideas of what kinds of tests a human-level intelligent machine needs to pass in order to be considered an operational example of artificial general intelligence. A few of these scientists include the late Alan Turing, Steve Wozniak, Ben Goertzel, and Nils Nilsson. A few of the tests they have proposed are:

The Turing Test (Turing)
In the Turing Test, a machine and a human both converse sight unseen with a second human, who must evaluate which of the two is the machine.
The Coffee Test (Wozniak)
A machine is given the task of going into an average American home and figuring out how to make coffee. It has to find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.
The Robot College Student Test (Goertzel)
A machine is given the task of enrolling in a university, taking and passing the same classes that humans would, and obtaining a degree.
The Employment Test (Nilsson)
A machine is given the task of working an economically important job, and must perform at least as well as humans in the same job.

These are a few tests that cover a variety of qualities that a machine might need to have to be considered AGI, including the ability to reason and learn.

Problems requiring AGI to solve

The most difficult problems for computers to solve are informally known as “AI-complete” or “AI-hard”, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.

Currently, AI-complete problems cannot be solved with modern computer technology alone, and also require human computation. This property can be useful, for instance to test for the presence of humans as with CAPTCHAs, and for computer security to circumvent brute-force attacks.

Mainstream AI research

History of mainstream research into strong AI (ASI)

(Here ASI stands for artificial superintelligence.)

Modern AI research began in the mid-1950s. The first generation of AI researchers was convinced that strong AI (ASI) was possible and that it would exist in just a few decades. As AI pioneer Herbert A. Simon wrote in 1965: “machines will be capable, within twenty years, of doing any work a man can do.” Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke’s character HAL 9000, who accurately embodied what AI researchers believed they could create by the year 2001. Of note is the fact that AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved,” although Minsky states that he was misquoted.

However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. The agencies that funded AI became skeptical of strong AI (ASI) and put researchers under increasing pressure to produce useful technology, or “applied AI”. As the 1980s began, Japan’s fifth generation computer project revived interest in strong AI (ASI), setting out a ten-year timeline that included strong AI goals like “carry on a casual conversation”. In response to this and the success of expert systems, both industry and government pumped money back into the field. However, the market for AI spectacularly collapsed in the late 1980s, and the goals of the fifth generation computer project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent arrival of strong AI (ASI) had been shown to be fundamentally mistaken about what they could accomplish. By the 1990s, AI researchers had gained a reputation for making promises they could not keep; they became reluctant to make any kind of prediction at all and avoided any mention of “human level” artificial intelligence, for fear of being labeled “wild-eyed dreamers.”

Current mainstream AI research

In the 1990s and early 21st century, mainstream AI achieved a far higher degree of commercial success and academic respectability by focusing on specific sub-problems where researchers can produce verifiable results and commercial applications, such as neural networks, computer vision or data mining. These “applied AI” applications are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry.

Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems using an integrated agent architecture, cognitive architecture or subsumption architecture. Hans Moravec wrote in 1988:

“I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts.”

However, much contention has existed in AI research, even with regards to the fundamental philosophies informing this field; for example, Stevan Harnad from Princeton stated in the conclusion of his 1990 paper on the Symbol Grounding Hypothesis that:

“The expectation has often been voiced that “top-down” (symbolic) approaches to modeling cognition will somehow meet “bottom-up” (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) — nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).”

Artificial general intelligence research

Artificial general intelligence (AGI) describes research that aims to create machines capable of general intelligent action. The term was introduced by Mark Gubrud in 1997 in a discussion of the implications of fully automated military production and operations. The research objective is much older; for example, Doug Lenat’s Cyc project (which began in 1984) and Allen Newell’s Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as “producing publications and preliminary results”. As yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near (i.e. between 2015 and 2045) is plausible.[33] Most mainstream AI researchers doubt that progress will be this rapid. Organizations actively pursuing AGI include the Machine Intelligence Research Institute, the OpenCog Foundation, the Swiss AI lab IDSIA, Numenta and the associated Redwood Neuroscience Institute.

Processing power needed to simulate a brain

Whole brain emulation

A popular approach discussed to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.[34] Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.


Today in class we discussed how to measure the database size and how the INSERT, SELECT, CREATE, UPDATE and DELETE commands work, with examples. Then we discussed how we can implement that in Milestone 3 and everything we have to do for it. Todd asked us to do a rough calculation of btrover space management in Milestone 3.

Tables in a database are stored in a structure called a partition. Clustered indexes determine the storage order; there can be only one clustered index per table, and these indexes are implemented as a B-Tree. Todd explained this to us by showing two videos in class. We then created our own clustered index in SQL Server. The primary key is already a clustered index by default. I think this is the hardest part, because I found it difficult to do.
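A minimal sketch of what we did in SQL Server. The table and column names here are just my own illustration, not the milestone schema:

```sql
-- The primary key becomes the clustered index by default:
CREATE TABLE dbo.Customer (
    CustID INT NOT NULL PRIMARY KEY,   -- clustered B-Tree built on CustID
    Name   VARCHAR(100),
    Email  VARCHAR(100)
);

-- Only one clustered index is allowed per table, so to cluster on a
-- different column the primary key must be declared nonclustered:
CREATE TABLE dbo.OrderEntry (
    OrderID   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    CustID    INT NOT NULL,
    OrderDate DATE
);
CREATE CLUSTERED INDEX IX_OrderEntry_OrderDate
    ON dbo.OrderEntry (OrderDate);     -- rows now stored in B-Tree order of OrderDate
```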


I have actually been thinking about my project for the last 3 months (since the day I started my course). I am still confused about what I want to do in my project. I am interested in doing research on artificial intelligence, because research on it is still ongoing and no one has found the answer yet, so it seems interesting to me. Artificial general intelligence is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. Artificial general intelligence is also referred to as “strong AI”, “full AI”, or as the ability of a machine to perform “general intelligent action”.

Some references emphasize a distinction between strong AI and “applied AI” (also called “narrow AI” or “weak AI”): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to perform the full range of human cognitive abilities.

So to study more about this I need a proper guide, and I am not sure who will help me further. All these questions are still going through my mind. Today I also discussed my proposal with Belma, and she told me that I should approach Clare or Mark with my further doubts.

DAT 601- MAY 23,26 CLASSES

I was unable to attend these classes, so I asked my friend about them. She told me that the classes were about transaction analysis and that the tutor explained it through examples. So I read the notes given by Todd, understood the features of transaction analysis, and looked at and examined more of the examples given.

  • Expected frequency of each transaction
  • The relations and attributes accessed, and the type of access: Select, Insert, Update, Delete
  • Attributes used in the SQL (predicates, WHERE clause)
  • Check for pattern matching, range searches, or exact-match key retrieval (*access structure)
  • Join attributes (*access structure)
  • Time constraints imposed on the transaction
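To make the checklist concrete, here is how I would analyse one hypothetical transaction. The table names, columns and figures are invented for illustration, not taken from the course examples:

```sql
-- Transaction: "show a customer's recent orders"
-- Expected frequency: say, 50 times per hour at peak (invented figure)
-- Relations accessed: Customer (Select), OrderEntry (Select)
-- Predicates: exact match on CustID (candidate for a B-Tree access structure),
--             range search on OrderDate (favours a clustered B-Tree)
-- Join attribute: CustID (*access structure on OrderEntry.CustID)
SELECT c.Name, o.OrderID, o.OrderDate
FROM dbo.Customer c
JOIN dbo.OrderEntry o ON c.CustID = o.CustID
WHERE c.CustID = 20005
  AND o.OrderDate >= '2016-05-01';
-- Time constraint: e.g. must return within 2 seconds (invented figure)
```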


The IT area/subject I have most enjoyed is ….

Actually, when I was doing my bachelor's degree I was interested in the database subject. But now, doing research on artificial intelligence, I am getting interested in that subject too. I like the research subject in my graduate diploma because it keeps my head working at all times.


The IT area/subject I least enjoyed is…..

Cloud computing. I do not have much of an idea about it.


The IT area/subject I was most interested in is……

Databases. I quite like them.


The one IT thing I never want to have to do again is…………..

I did not even think about that. I am an IT student, so I want to do everything in IT.


I chose to study IT because…………….             

I want IT as my future profession.


If I couldn’t study IT I would study………..

I would try for medicine. I love doctors and I love the way they behave towards patients.


When I was a kid I wanted to be………………..

A doctor, a pediatrician. I love to spend my time with children.


One IT thing I would like to know more about is   ……..

Everything I would need to do as an IT professional.



In today's class we worked on SQL subqueries. I was not in the previous classes because of illness, so my friend Prerna helped me do the work in class. I also asked my tutor about it, and he helped me with the queries too.

1st Query:

SELECT OrderID,
       OrderDate,
       (SELECT SUM(Quantity)
        FROM [dbo].[OrderLine]   -- table name missing from the original fragment; assumed
        WHERE OrderID = 20005) AS TotalQuantity
FROM [dbo].[OrderEntry]          -- table name missing from the original fragment; assumed
WHERE OrderID = 20005;





Comments: Lists the OrderID, OrderDate and the summed TotalQuantity for order 20005. There is no correlation here: the same OrderID (20005) is repeated in the main query and the subquery. Not a very good example.

2nd Query:

SELECT p.[ProductID],
       ven.VendorName AS VendorName
FROM [dbo].[Product] p INNER JOIN
     (SELECT [VendorID], [VendorName]
      FROM [dbo].[Vendor]        -- table name missing from the original fragment; assumed
      WHERE VendorName LIKE 'Bear%'
     ) AS ven
ON p.VendorID = ven.VendorID;




Comments: Inner join between p and ven. Here we haven't used AS for the alias p, but it doesn't need to be added. This query lists the ProductID and the VendorName where the VendorName starts with 'Bear'.

3rd Query: A



SELECT CustID, Name, Email       -- select list reconstructed from the comment below; only a fragment survived
FROM [dbo].[Customer]
WHERE CustID IN (
    SELECT CustID
    FROM [dbo].[OrderEntry]
);

Comments: Lists the CustID, Name and Email of customers who have at least one order in OrderEntry.

3rd Query: B


SELECT c.CustID, c.Name, c.Email -- select list reconstructed from the comment below
FROM [dbo].[Customer] c
WHERE (SELECT COUNT(CustID) AS NumberOfOrders
       FROM [dbo].[OrderEntry] p
       WHERE c.CustID = p.CustID
      ) >= 2;



Comments: Lists the CustID, Name and Email of customers who have placed two or more orders.