ASSIGNMENT
Session overview:
1. Assignment Aim
2. Assignment Objectives
3. Report overview
4. Recommended division of workload
5. Tutorials
6. Peer to peer review
7. Support
8. General Feedback on the 2016 Assignment
9. Questions
TUTORIALS
THURSDAY: 1:30pm – 4:30pm
FRIDAY: 11am – 12:30pm & 1:30pm – 3:30pm
The aim:
LOGISTICS
Group project: requires collaboration
Interdisciplinary teams of 5 members (pre-allocated)
Masters-level subject: worth 50% of the mark
The objectives:
Learn (through research) about the range of different green building rating tools on the market
Learn how rating tools work and the process of undertaking an evaluation
Learn how sustainability criteria/credits can be applied within the built environment
Learn what evidence is required to demonstrate successful application of sustainability criteria/credits within the built environment
3. EVALUATION STAGE:
**Important Note**: You are not designing a building. Instead, you are designing a rating system, and are to demonstrate your understanding of how it might be applied to a building.
TUTORIAL 1
Describe how your rating tool works (what is its relationship back to your group’s definition of sustainability): ‘Criteria’, ‘Credits’, ‘References’.
Describe (citing existing rating tools) how a design team might apply the tool’s criteria (i.e. specific credits) to a development.
Describe (citing existing rating tools) what types of evidence can be used to demonstrate that the tool’s criteria have been met. (Look beyond Green Star)
Refer to Assignment Q & A slides
Assignment Roadmap
Group number: ___  Date/time of individual consultation with Tutor: ___
We define sustainability as: ___
The name of our rating tool is: ___
This project has the following unique spatial requirements, services and/or user functions: (lecture theatre, large HVAC, exhaust, learning… etc.)
Criteria weighting:
- ___ points ___ %
- ___ points ___ %
- ___ points ___ %
- ___ points ___ %
- ___ points ___ %
Total number of points awarded by rating tool: ___ (100%)
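The criteria weightings in the roadmap must total 100%. A minimal sketch of checking a draft weighting before submission; the category names and percentages below are placeholders, not part of the assignment:

```python
# Hypothetical draft weighting for a rating tool; categories and
# percentages are illustrative placeholders only.
weights = {
    "IEQ": 20,
    "Energy": 30,
    "Water": 20,
    "Materials": 15,
    "Management": 15,
}

total = sum(weights.values())
assert total == 100, f"Weightings sum to {total}%, not 100%"
print(total)  # 100
```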
The ‘sustainability’ of the project will be assessed at the following stage: (design, as built, post occupancy etc.)
The different levels of sustainability awarded by the rating tool are: (it is recommended that you identify between 3 and 5 levels)
Marking breakdown:
Justification of tool stage (1 mark). Responsible: ___
Description of building type and unique features (1 mark). Responsible: ___
Discussion of tool criteria (5 marks). Responsible: ___
Justification of criteria weighting according to sustainability definition (5 marks). Responsible: ___
Summary of how each credit relates to the project and may be applied, and issues related to HVAC, IEQ, Water (15 marks). Responsible: ALL
Summary of evidence required for each credit to be awarded, and issues related to GHG emissions and embodied energy (10 marks). Responsible: ALL
Conclusion (2 marks). Responsible: ___
ASSIGNMENT
Research before the tutorial:
Any unique features of that building typology (i.e. type of services or specific areas required)
The ‘stage’ which the assessment applies to (i.e. design, at completion, post-occupancy)
The range of criteria that your tool uses to assess ‘sustainability’
How each criterion is weighted according to the evaluation framework
What aspects of sustainability are the lectures talking about? Bring these observations to tutorial…
The lecture program and having contact with these experts is your best resource this week! So ask questions…
ASSIGNMENT
Support:
On Thursday, inform your tutor of all the questions you have about the assignment; these will be answered on Friday in the first hour (all tutorial groups will be given the same Q&A presentation).
30-minute face-to-face group meetings with Pippa between October 5th and 23rd in the Baldwin Spencer student lounge. After-hours appointments available. (A Doodle schedule will be circulated.)
NOTE:
Do not email assignment questions to Pippa. Use the LMS.
If you experience any group dynamic problems, email these to Pippa as soon as any issues arise.
TUTORIALS
Peer-to-Peer Review

ADJUSTMENT FACTOR
Rating scale:
8. Excellent: Consistently went above and beyond - tutored teammates, carried more than her/his fair share of the load
5. Ordinary: Often did what she/he was supposed to do, minimally prepared and cooperative
4. Marginal: Sometimes failed to show up or complete assignments, rarely prepared
3. Deficient: Often failed to show up or complete assignments, rarely prepared
2. Unsatisfactory: Consistently failed to show up or complete assignments, unprepared
1. Superficial: Practically no participation
0. No show: No participation at all

EXAMPLE 1: Positive experience
Peer ratings received: Very good, Excellent, Very good
Adjustment Factor = 1.1
Group assignment mark = 40/50 (H1)
Your individual assignment mark is calculated using 40 x 1.1 = 44/50 (H1+)

EXAMPLE 2: Negative experience
Peer ratings received: Very good, Unsatisfactory, Marginal
Adjustment Factor = 0.6
Group assignment mark = 40/50 (H1)
Your individual assignment mark is calculated using 40 x 0.6 = 24/50 (P)
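The adjustment-factor arithmetic in the two examples can be sketched as follows. This is a minimal illustration only; the function name, the rounding, and the cap at 50 marks are my assumptions, not the subject’s official formula:

```python
# Sketch of how a peer-review adjustment factor scales a group mark.
# Assumptions: marks are out of 50, results are rounded to one decimal
# place, and the adjusted mark cannot exceed the maximum available mark.

def individual_mark(group_mark: float, adjustment_factor: float,
                    max_mark: float = 50.0) -> float:
    """Scale the group mark by the peer-review adjustment factor,
    capping the result at the maximum available mark."""
    return min(round(group_mark * adjustment_factor, 1), max_mark)

print(individual_mark(40, 1.1))  # Example 1: 44.0 (H1+)
print(individual_mark(40, 0.6))  # Example 2: 24.0 (P)
```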
ASSIGNMENT
The overall quality of the assignments was very high, with an impressive variety
of different ideas and approaches to rating tools.
Please note that in the marking strategy, half marks were not awarded.
Note that some of the images were very pixelated, making them hard to read.
Groups were not penalised, but pixelated images did make the report seem less
professional.
For reports of this size (some up to 70 pages!), a single bulldog clip is not
adequate for holding the report together. Use binding or staples. Groups were
not penalised, but it did make the report seem less professional.
Points were not awarded for the use of sub-headings, but I will note how these
helped with the communication of your ideas/content, which is particularly
important in large reports.
ASSIGNMENT
REPORT STRUCTURE:
To achieve full marks in the introduction, it was necessary to state the objective
of the report and provide an outline of the report structure. In a large report, the
reader needs to understand the logic of the report (how have you structured it?).
The report structure is not the contents page.
To achieve full marks in the conclusion, it was necessary to restate the report
objective and outline how the objective was met. Many groups provided
conclusions that were too brief and, standing alone, contained an inadequate level
of detail. The groups that were particularly successful often used their
conclusion to outline the limitations of their tool, or areas for further
development.
ASSIGNMENT
APPLICATION OF RESEARCH:
Half the marks (2) were awarded for citing the research, while the other half (2)
were awarded for evidence in the report of groups critically engaging with what
the research stated.
Critical engagement means using your understanding of the topic to relate the
research to appropriate points and critique it. For example, some of the
literature might be “old” (i.e. if it was published 30 years ago, it might no longer
be as relevant, or may have been superseded…) or it may relate to a different context
(locality/building type). If this was the case, then discuss it.
CITATION:
The marking was very strict. To gain the full mark you needed to:
Use a consistent referencing style. Harvard was preferred, but APA was also
acceptable.
Cite all sources correctly. Please note that quotations require a page
number. If no page number was supplied, you lost the point.
All figures in the report needed to be referenced, with a caption describing
what each shows. Note that the caption should include the source.
All sources cited in text were required to be listed in the references (some were
missing…).
All online sources needed to include the date they were accessed.
ASSIGNMENT
DISCUSSION OF TOOL:
Overall, this was the section of the report where groups lost the most points. I
cannot stress enough the importance of clearly communicating the concept for the
rating tool. The most common mistakes that groups were penalised for:
Using the Report Introduction or Executive Summary to introduce the tool and
describe how it works. Generally speaking, this was not appropriate. Instead it
was confusing, particularly as the report then went directly into the
design/application stage (making it appear as if a whole chunk of the report was
missing!)
Inadequate detail about how the criteria were weighted (why is IEQ worth 20%?)
ASSIGNMENT
DISCUSSION OF TOOL:
No discussion about the stage of the tool’s assessment. It was not enough to just
state what the stage was. It was expected that the assignment would describe
why that choice was significant. (Why POE? Why design?)
Your group’s tool was not different enough from existing tools. In the face-to-face
meetings I cautioned your group if I felt this was a risk. Some groups did
not articulate clearly why their tool was unique and needed a customised version,
and then provided too little detail about the points of difference.
Under ‘description of unique features’ it was not adequate to just list the spaces.
Your group needed to describe why these spaces were unique to the chosen
building typology.
ASSIGNMENT
APPLICATION OF TOOL:
Overall, this section was very well done, with the research involving
existing tools thoroughly demonstrated. Groups were penalised for the following
reasons:
The choice of evidence did not relate to the stage of the assessment.
Some groups recommended evidence that would not exist for the stage of the
assessment. For example, at the design stage it would be impossible for owners
to produce their electricity bill. This is evidence required post occupancy.
The choice of reference material was not appropriate (i.e. groups chose to
reference existing tools that were not consistent with their concept).