| Category | Assignment | Subject | Computer Science |
|---|---|---|---|
| University | Canterbury Christ Church University (CCCU) | Module Title | U19967 Artificial Intelligence |
| Academic Year | 2025/26 | | |
| Field | Details |
|---|---|
| Course Name | Computer Science, Data Computing Intelligence |
| Module Title | Artificial Intelligence |
| Module Code | U19967 |
| Module Start Date / Cohort | September 2025 |
| Module Level | 6 |
| Assessment Type(s) | Portfolio |
| Word Length / Duration | No set word count |
| % Weighting | 50% |
| Deadline (date & time) for Submission | 21/11/2025 at 2pm |
| Format/Location of Submission | Electronic copy via Turnitin + video recording / Blackboard |
Feedback to support the work will be given in classes.
Formal feedback will be provided within 15 working days of the submission deadline.
Predicate logic provides many different rules of inference. Because they are universally valid, they may be used either to validate complete arguments or to generate new conclusions. Moreover, individual rules of inference may be applied on their own or in conjunction with others.
Given the following list of basic inference rules:
1. Modus ponendo ponens (MPP): A → B, A |-- B
(e.g. IF my program is correct THEN it will run; my program is correct; THEREFORE it will run)
2. Modus tollendo tollens (MTT): A → B, ~B |-- ~A
(e.g. IF my program is correct THEN it will run; my program will NOT run; THEREFORE it is not correct)
3. Double negation (DN): A |-- ~(~A)
(e.g. My program has run THEREFORE my program has not not run)
4. &-introduction (&INT): A, B |-- (A & B)
(e.g. My program has run; it is correct; THEREFORE my program has run AND is correct)
5. Reductio ad absurdum (RAA): A → B, A → ~B |-- ~A
(e.g. IF my program is correct THEN it will run; IF my program is correct THEN it will NOT run; THEREFORE my program is not correct)
6. Universal specialisation (US): ∀(X) W(X), A |-- W(A)
(e.g. All things which are computers are unreliable; a ‘TIPTOP’ is a computer; THEREFORE a ‘TIPTOP’ is unreliable).
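To see how such a rule behaves operationally, here is a minimal Prolog sketch of the MPP example above (the predicate names runs/1 and correct/1 are illustrative choices, not part of the brief):

```prolog
% Minimal sketch of MPP: IF my program is correct THEN it will run (A -> B);
% my program is correct (A); therefore it will run (B).
% Predicate names are illustrative only.

runs(P) :- correct(P).   % A -> B
correct(my_program).     % A

% ?- runs(my_program).   % succeeds: B follows by modus ponens
```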
A. Given the rule ‘IF artificial intelligence is a growing subject THEN there is no shortage of applicants’ and the fact ‘there is a shortage of applicants’:
The following is the rule set of a simple expert system for diagnosing plant issues:
A. Use forward chaining to reason about the plant issue diagnosis if the working memory contains the facts: leaves are yellow, soil is dry, temperature is high. Show your answer in a table naming the rules matching the current working memory (the conflict set), which rule you apply, and how the working memory contents change on the next cycle after a rule has fired.
| Cycle | Working Memory | Conflict Set | Rule fired |
|---|---|---|---|
B. Use backward chaining to reason about the plant issue diagnosis if the working memory contains the fact: problem = root rot risk. Show your answer in a similar table.
C. Provide your own example demonstrating backward chaining to reason about the plant issue diagnosis. Show your answer in a similar table.
D. Suppose that the user interface of our expert system allows the system to ask the user whether facts are true or false. What question (or questions) should the system ask the user in order to conclude that the diagnosis is ‘improve the soil drainage’? What will the user answer? Which rule will require clarification from the user?
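To make the chaining mechanics concrete, here is a minimal forward-chaining loop in Prolog. Since the brief's actual plant rule set is not reproduced above, the two rules below (r1, r2) and their conclusions are invented placeholders, not the rules you should use in your answer:

```prolog
% Working memory is held in the dynamic predicate fact/1.
:- dynamic fact/1.

fact(leaves_yellow).
fact(soil_dry).
fact(temperature_high).

% rule(Name, Conditions, Conclusion) -- hypothetical rules for illustration.
rule(r1, [leaves_yellow, soil_dry], drought_stress).
rule(r2, [drought_stress, temperature_high], increase_watering).

% One cycle: pick the first rule whose conditions all hold and whose
% conclusion is not yet in working memory (a simple conflict-resolution
% strategy), fire it, add the conclusion, and repeat until nothing fires.
forward :-
    rule(Name, Conds, Concl),
    forall(member(C, Conds), fact(C)),
    \+ fact(Concl),
    !,
    format("cycle: firing ~w, adding ~w~n", [Name, Concl]),
    assertz(fact(Concl)),
    forward.
forward.
```

Each pass of `forward/0` corresponds to one row of the table above: the matching rules form the conflict set, one rule fires, and the working memory grows by its conclusion.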
Design and then implement a mini medical diagnosis expert system in Prolog:
1. Define a set of symptoms and corresponding facts that represent medical conditions. Example:
Fact: "Fever above 38°C"
Fact: "Persistent cough for more than two weeks"
Fact: "Severe headache accompanied by nausea"
Fact: "Shortness of breath"
Fact: "Joint pain and swelling"
2. Create rules based on medical knowledge to infer possible diagnoses from the symptoms provided. For example:
Rule: If the patient has a fever above 38°C and persistent cough for more than two weeks, consider tuberculosis.
Rule: If the patient has shortness of breath and wheezing, consider asthma.
Rule: If the patient has severe headache accompanied by nausea and vomiting, consider migraine or meningitis.
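A minimal sketch of such a system is shown below; the symptom and diagnosis names mirror the example rules in the brief, but the predicate design (has_symptom/2, diagnosis/2) and the sample patient are our own assumptions:

```prolog
% Sample patient data -- hypothetical, for testing only.
has_symptom(patient1, fever_above_38).
has_symptom(patient1, persistent_cough_two_weeks).

% Diagnosis rules, following the examples in the brief.
diagnosis(P, tuberculosis) :-
    has_symptom(P, fever_above_38),
    has_symptom(P, persistent_cough_two_weeks).

diagnosis(P, asthma) :-
    has_symptom(P, shortness_of_breath),
    has_symptom(P, wheezing).

diagnosis(P, migraine_or_meningitis) :-
    has_symptom(P, severe_headache),
    has_symptom(P, nausea),
    has_symptom(P, vomiting).

% ?- diagnosis(patient1, D).
% D = tuberculosis.
```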
Let a Finite State Machine (FSM) A be defined by A = (Q, Σ, q0, δ, F) with:
Q = {0, 1, 2, 3}, Σ = {a, b}, q0 = 0, F = {3}
and the transition function δ:
| q | t | q’ |
|---|---|---|
| 0 | a | 1 |
| 0 | a | 2 |
| 0 | b | 2 |
| 1 | a | 3 |
| 2 | b | 2 |
| 2 | b | 3 |
1. Draw this FSM (hand-drawn).
2. Give the shortest word recognized by the automaton.
3. Give an example of a word not recognized by the FSM.
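For checking candidate answers, the automaton above can be encoded directly in Prolog; this sketch is not required by the brief. Note that δ is nondeterministic (two transitions on (0, a) and two on (2, b)), which Prolog's backtracking handles naturally:

```prolog
% Transition function delta from the table above.
delta(0, a, 1).  delta(0, a, 2).  delta(0, b, 2).
delta(1, a, 3).  delta(2, b, 2).  delta(2, b, 3).

final(3).

% accepts(State, Word): Word is recognized starting from State.
accepts(Q, [])    :- final(Q).
accepts(Q, [T|W]) :- delta(Q, T, Q1), accepts(Q1, W).

% ?- accepts(0, [a, a]).   % true: 0 -a-> 1 -a-> 3
% ?- accepts(0, [a]).      % false: neither 1 nor 2 is final
```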
B. Design a finite state machine (FSM) for a non-player character (NPC) that can navigate a maze. Provide the formal definition of your FSM, including:
Choose a local, real-world domain that you are personally familiar with, such as:
1. Identify the domain and describe it in 3–5 sentences.
2. Design a simple ontology for this domain, including at minimum:
3. Design the ontology as a diagram (using Protégé).
4. Explain your modeling choices, especially:
You are expected to submit two elements, as detailed in the appendix (pages 5 and 6).
- In-class recordings.
- Use the last 30 minutes of the practical sessions to seek feedback from the module leader.
| Activity | Excellent 100-80 | Very Good 79-70 | Good 69-60 | Sound 59-50 | Satisfactory 49-40 | Fail 39-0 |
|---|---|---|---|---|---|---|
| Activity 1 (20%) | Demonstrates a comprehensive understanding of predicate logic and inference rules. Correctly formalises all statements, applies the appropriate rule accurately and provides a valid alternative proof. Logical steps are clearly presented, rule names are identified, and reasoning is fully justified. The formal summary and plain-English explanation are precise, coherent, and well-structured. | Shows strong understanding with accurate use of inference rules and mostly correct formalisation. Minor errors in notation, explanation, or structure, but overall reasoning is valid. Both proofs are largely correct and clearly expressed. | Displays a reasonable understanding of inference principles. Main proof is mostly correct but may contain small logical or structural errors. Alternative proof is attempted but incomplete or partially justified. Presentation and explanation are clear but not comprehensive. | Demonstrates basic understanding of inference. Some correct steps identified, but reasoning is partially flawed or lacks clarity. May confuse rules or omit necessary justification. Limited explanation and structure. | Minimal understanding shown. Attempt made to formalise or apply rules, but with major gaps or incorrect logic. Little or no coherent explanation. | No valid formalisation or logical reasoning. Misapplication of rules, missing proofs, or incoherent response. Fails to demonstrate understanding of predicate logic or inference processes. |
| Activity 2 (20%) | Demonstrates an in-depth understanding of forward and backward chaining. All reasoning steps are logically sound, clearly presented, and complete. Effectively identifies user questions and links them correctly to rules. Work is highly structured, with no or minimal errors. | Shows a strong understanding of expert system reasoning. Minor errors in chaining steps or conflict set identification may exist but do not affect the logic. User interaction analysis is relevant and mostly accurate. Work is clear and mostly well-organized. | Shows a good understanding of chaining techniques with mostly correct logic. Some conflict sets or rule firings may be missing or misidentified. User interaction question may lack clarity but is on the right track. | Demonstrates basic understanding with several logical or structural issues. Forward/backward chaining may have errors or missing steps. User question is vague or partially incorrect. Work meets basic requirements. | Limited understanding of expert system reasoning. Significant errors or omissions in chaining steps. User interaction component is incorrect or unclear. Work lacks structure or clarity. | Fails to demonstrate understanding of forward/backward chaining or expert system logic. Steps are incorrect, incomplete, or missing. User question is irrelevant or missing. Work does not meet the minimum standard. |
| Activity 3 (20%) | Demonstrates an expert-level understanding of expert systems and Prolog. Accurately defines appropriate symptoms and conditions. Rules are medically sound, logically valid, and well-structured in Prolog. System runs without errors and provides correct diagnoses. Code is clean, modular, and well-commented. | Strong implementation with mostly accurate symptoms and diagnoses. Rules show solid understanding of logic and are implemented correctly. Minor syntax or logic issues may exist but do not affect system functionality significantly. Well-organized code with minimal errors. | Defines a reasonable set of symptoms and conditions. Rules generally make sense and mostly work in Prolog. Some issues in logic or structure may lead to minor inaccuracies in diagnosis. Code may have minor errors or lack clarity but shows competence. | Basic but functional Prolog implementation. Symptoms and rules are present but may be overly simplistic, incomplete, or not fully aligned with medical logic. System runs but with noticeable issues. Code may be disorganized or partially incorrect. | Minimal implementation. Symptoms and rules are vague or not well-aligned with realistic diagnoses. Several syntax or logical errors. System may not run correctly or provide valid results. Code lacks structure and clarity. | Fails to implement a working expert system. Major flaws in Prolog syntax, logic, or understanding of expert systems. System does not compile or produces incorrect/incoherent diagnoses. Symptoms and |
| Activity 4 (20%) | Demonstrates a comprehensive understanding of FSMs. In Part A, all answers are correct: diagram is accurate, shortest and rejected words are correctly identified. In Part B, the FSM design is clear, logically sound, fully defined (states, transitions, alphabet, etc.), and includes a well-labeled diagram. Work shows originality and completeness. | Shows strong understanding. Part A is mostly correct with only minor issues (e.g., minor diagram error or alternate valid word). Part B FSM is mostly correct with complete definitions and a clear diagram, though may lack some clarity or optimal design. | Understands FSM concepts. Minor errors in Part A (e.g., word recognition mistake or missing transition in diagram). Part B shows a valid FSM with appropriate structure, but might have gaps in definitions, transitions, or clarity in the diagram. | Demonstrates basic understanding. Multiple errors in Part A, such as incorrect diagram or word analysis. Part B FSM is simplistic or incomplete, with missing states, transitions, or unclear diagram. Logic is somewhat coherent but underdeveloped. | Limited understanding of FSMs. Part A has several inaccuracies or missing components. Part B FSM is vague, poorly defined, or lacks a usable diagram. Transitions or logic may be incorrect or arbitrary. | Work shows little or no understanding of FSM concepts. Part A is mostly or entirely incorrect. Part B FSM is missing, nonsensical, or completely incorrect. Diagram is missing or irrelevant. |
| Activity 5 (20%) | Demonstrates deep understanding of ontology design and semantic modeling. The domain is clearly explained and locally grounded. Ontology includes well-chosen classes, subclasses, object and data properties, and meaningful individuals. Diagram is precise and complete. Modeling choices are clearly justified, with strong use of SWRL/SQWRL for reasoning. Assumptions and challenges are well-articulated. | Strong work with a clear domain description. Ontology is correctly structured with appropriate elements, and the diagram is mostly accurate. Reasoning examples are relevant, though may lack depth or contain minor errors. Good justification for design choices. | Adequate domain and ontology. Some components may be simplistic or imbalanced (e.g., too few properties or missing subclass relationships). Diagram is present but may have clarity issues. Reasoning examples work but may not fully reflect ontology structure. Design rationale is somewhat general. | Basic understanding of ontologies is evident. Domain is minimally described. Ontology includes required components but lacks clear structure or depth. Diagram may be confusing or incomplete. Reasoning use is minimal or partly inaccurate. Justification is limited. | Limited ontology structure. Required elements are missing or poorly implemented. Diagram may be poorly constructed or missing key parts. Reasoning is absent or incorrect. Minimal explanation of modeling choices. Domain connection is vague. | Ontology is incomplete or incorrect. Missing essential elements like classes, relationships, or individuals. Diagram is absent or irrelevant. No reasoning examples provided. Modeling explanation is unclear or missing. Domain is not meaningfully described. |