Applying Pedagogic Theory to Render Farm Tool Design
David Tree
School of Creative Arts, University of Hertfordshire
d.tree@herts.ac.uk
ABSTRACT
A key feature of the Animation programme at the University of Hertfordshire is
the creation of a final major project at the culmination of Level 6, following a
similar project in Level 5. Historically, tutors provided formative feedback to
students on technical aspects of their scene files; however, given the quantity of
files, this was often laborious. On review, the tutors found that they were
frequently providing the same solutions to different students, demonstrating the
need to review our feedback and support system. Initially, a simple pipeline tool
was developed to improve the usability of our key software package, Autodesk Maya;
however, it did not provide any direct feedback on the quality of the file itself.
This led to students becoming frustrated with projects, ultimately seeking help
from technical staff, as they were unsure how to resolve the issues.
With no suitable tool available on the market, the opportunity arose to develop a
learning-centric tool that would formulate and deliver technical feedback on scene
files. To test the efficacy of this tool, a quantitative methodology was used,
which showed that certain types of automated feedback can effectively educate
students. Secondly, the results of this study demonstrated that, even with readily
available feedback, students only applied it when presented with a barrier.
Without these barriers, students continued to submit files with known errors
regardless of the implications.
INTRODUCTION:
During the course of the Digital Animation program, students are educated in practical
approaches to the creation of short animated films. The process for developing animated
films is governed by the production pipeline (Cantor and Valencia, 2004). In Level 4,
students are taught a holistic view of short film production. Building upon this
during Level 5, students conceive and create a film following the defined pipeline
as a dry run for their Level 6 productions.
Within this animation pipeline is a key process known as ‘rendering’, where a file containing
geometric data is passed through a render engine producing a still image. When combined
and played back at a speed above twelve frames per second, the illusion of motion is created
and this forms the final stage of production before the application of editing and effects. The
average duration of a student film at the University of Hertfordshire is 3 minutes, which
requires approximately 3,500 frames to be processed in total. In an industry
context, these files would be processed by a system known as a ‘render farm’,
which the Oxford English Dictionary defines as “a group of networked computers for
jointly processing a rendering job, used [especially] in the production of
computer-animated films” (OED Online, 2016).
In contrast to this industry practice, in previous years students would manually
set these renders to process on individual computers in one-hundred-frame blocks.
This required the manual configuration of approximately thirty-five machines. Not
only did this impair the students’ efficiency, it also failed to reflect relevant
industry practices (Creative Skillset, 2015).
In 2013, student feedback on the rendering process was addressed with the
introduction of a render farm. Following this change, a marked improvement in
render quality was noted and an increased number of student films were completed
on time. The usage guidelines for the render farm stipulate that no textures
should be missing from the file and that each frame should process in under
forty-five minutes. Although instructed in these prerequisites, students continued
to flout the usage guidelines, leading to some students experiencing
disappointment and unexpected results when collecting their rendered files.
Historically, several approaches to encourage students to follow usage guidelines have been
attempted including group tutorials, class inductions and finally the vetting of scene files by a
technician before submission. Each of these approaches, with exclusion of technical review,
yielded less than successful results; students were complaining that their colleagues were
delaying the render farm queue by generating a large quantity of errors.
Vetting student files was successful in resolving the technical issues introduced
by the students. However, this labour-heavy approach was impractical as a
long-term solution,
especially with pressures to grow cohorts in the future. A key objective of the animation
program is to create industry ready practitioners and this method did not encourage the
necessary independent behaviour, as the student became reliant on the technician. Finally,
with students often working over evenings and weekends, a technician-vetted rendering
service during office hours would be inaccessible when required by the students.
This paper focuses on the development of a tool which provides students with computer-
based, automated formative feedback at the ‘submission to render farm’ stage of the
production pipeline. The objective of this tool is to ensure students develop an understanding
of the requirements of render farm utilisation in the context of industry aligned rendering
techniques.
LITERATURE REVIEW:
A requirement of the Creative Skillset accreditation programme, known as the
‘Tick’, is that the university, supported by industry partners, must train
work-ready practitioners (Creative Skillset, 2015). Ensuring students have the
requisite professional skills to work in the
industry is achieved by a combination of research active and ex-industry teaching staff
(Comninos, McLoughlin and Anderson, 2010).
As indicated in the introduction, a key area of concern in the education of ‘work ready
graduates’ is the development of professional character and values (Hunt, 2008). A key
professional characteristic is the ability to work within teams; vital to this is an understanding
of the correct structuring of animation scene files and folders to ensure mobility to and from
the render farm. This characteristic will enable students to work better in teams,
as well as more effectively upon reaching employment (Hager and Holland, 2006;
University of Hertfordshire, 2014); hence the objective is to ensure that our
students perform in this manner as second nature.
This paper will attempt, through a bespoke tool, to encourage this key
characteristic by applying established learning theories. For the purpose of this
research, this paper concentrates on a combination of behaviourism, as first
theorised by Skinner (Skinner, 1938), and constructivism (Vygotskij et al., 2012).
Behaviourism focuses on the formation of behaviours through repetition and
correction, known as ‘conditioning’, leading the student to outcomes measurable as
either correct or incorrect (Pritchard, 2014). However, this chief strength could
be the downfall of behaviourism within the context of higher education, as it is
not compatible with the objective of creating critical thinkers. Given that this
stage in the production pipeline requires students’ files to conform to fixed
standards, we can justify the use of the method in this area with some adaptation
in its application.
Constructivism, by contrast (Pritchard, 2014), deals with learning by linking an
existing knowledge base with problem-solving skills to form new learning, rather
than simply correcting behaviour. By combining a binary response with
identification of the location of errors, whilst withholding the actual fault, we
can encourage problem solving, prompting students to draw on past knowledge and
experience to resolve their issues, thus integrating these two theories.
When providing information surrounding the faults, best practice in feedback should be
considered, ensuring that a positive learning experience is created as well as being a
technically effective tool. Chickering and Gamson (1987) inform us that a key
factor in successful feedback is the timeliness of its reception, without which
students are unable to learn from their mistakes and show proof of learning. A
limiting factor when producing feedback for each scene file is that each student
can have upwards of thirty scenes, each with multiple iterations, which, when
multiplied by the number of students, makes individual feedback impractical.
In addition to the timeliness of the feedback is its content; in an ideal world,
each student would receive a personalised response including signposting of the
types of mistakes made. Within blended learning courses, computer-marked
multiple-choice exams are key to success due to their ability to respond swiftly
and accurately to large cohorts of students (Anderson, 2008). Adapting this
concept and applying it to the analysis of animation scene files, we can begin to
address some of the concerns raised in the previous paragraph. By limiting the
issues identified to those which are key areas of student failure, it is
conceivable to create a computer-based automatic feedback system.
METHODOLOGY:
This study will undertake an adaptation of the action-based research methodology
(Hamilton, 1995), structured in the following four stages:
- Identifying the problem;
- Designing the solution;
- Measuring the result; and
- Refining the solution.
In best practice, this method would be applied in a loop to enable continuous refinement.
However, for the purpose of this paper, the tool will go through a single cycle due to time
constraints.
When students submit files to the render farm for processing, the files must meet
specific technical requirements to ensure successful processing. The objective of
this tool is to encourage students to resolve issues by themselves through
self-guided learning (Clark, 2012).
An initial tool, shown in Figure 1, had the sole purpose of enabling students to
see the location of file path errors. It was written to enable the render farm to
function; however, it did not offer any analysis or interpretation of these file
paths, instead relying on the student to identify the errors. This gave students
with greater technical ability a marked advantage over those who specialised in
art-heavy disciplines.
Figure 1: Existing UH Render Farm Tool prior to redesign
The proposed solution to the problem described is the redesign and build of a new
tool that provides automated formative feedback on the key concepts of file
configuration. When files are processed through the render farm, an error log is
produced indicating varying levels of success in processing student files. Upon
investigating the error logs from the past year, five key issues were discovered:
- Use of the standardised project workspace;
- All texture paths must reference a valid file;
- No spaces in the filename or file path;
- Textures must be linear file types (for example, no .jpeg or .gif);
- The project must exist on the render farm storage.
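Three of these five issues reduce to simple binary tests on file paths. A minimal sketch of how they might be expressed (the function names and the approved-extension list are illustrative assumptions, not the tool's actual implementation):

```python
import os

# Extensions treated as acceptable "linear" texture formats -- an
# illustrative list, not the tool's actual whitelist.
LINEAR_EXTENSIONS = {".exr", ".tif", ".tiff", ".tex", ".png"}

def has_no_spaces(path):
    """No spaces are allowed anywhere in the filename or file path."""
    return " " not in path

def is_linear_texture(path):
    """The texture must use an approved file type (e.g. not .jpeg or .gif)."""
    return os.path.splitext(path)[1].lower() in LINEAR_EXTENSIONS

def texture_exists(path):
    """Every texture path must reference a file that actually exists."""
    return os.path.isfile(path)

def check_scene(texture_paths):
    """Run the file-level checks over all texture paths in a scene."""
    return {
        "no_spaces": all(has_no_spaces(p) for p in texture_paths),
        "linear_types": all(is_linear_texture(p) for p in texture_paths),
        "textures_exist": all(texture_exists(p) for p in texture_paths),
    }
```

The workspace and render-farm-storage checks are omitted, as they depend on the local pipeline configuration.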
Once completed, this tool is to be deployed as part of our existing pipeline, and
its effectiveness measured by quantitative analysis of the render farm job
database and log files, counting the cumulative errors generated over time. Three
different kinds of indicator will be used to provide feedback to students: a
traffic-light indication, tick boxes and a summary. Given the methodology for
measuring success in this paper, we will be able to ascertain from the results
which methods of feedback are most successful in the live environment.
Firstly, traffic-light indicators consisting of coloured boxes will be used to
indicate missing or improperly pathed textures, with red indicating missing
textures and green or yellow indicating whether the file is ready for local or
farm-based rendering. The second form of feedback is shown within the Automated
Quality Check box at the bottom left of Figure 2: tick boxes were chosen instead
of cross boxes to provide a form of positive reinforcement, as suggested within
behaviourist theory (Ferster and Skinner, 1957). Finally, a large “passed” or
“failed” indicator, also at the bottom left of Figure 2, summarises the
aforementioned tick-box tests.
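The mapping from check results to these indicators could be sketched as follows (an illustrative simplification, not the tool's deployed code):

```python
def traffic_light(missing_textures, farm_ready):
    """Traffic-light colour: red for missing or mis-pathed textures,
    green if the file is ready for farm rendering, yellow if it is
    suitable for local rendering only."""
    if missing_textures:
        return "red"
    return "green" if farm_ready else "yellow"

def summary(check_results):
    """Collapse the individual tick-box tests into one passed/failed label."""
    return "passed" if all(check_results.values()) else "failed"
```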
Figure 2: Final design for UH Render Farm Tools
The data will be collected as part of the day-to-day operation of the render farm;
as a passive method of data collection (Jordan, 2014), it carries no risk of being
skewed by student awareness. Initial data collection will come from a dump of the
metadata from the job database held on the queue management server. This data will
then be examined to establish a minimum threshold of jobs needed before including
a student in this study, using the formula detailed below, to eliminate low-usage
cases. The tool used by students is triggered at the final stage before submitting
their render to the farm, which presents us with the difficulty of an unknown
number of samples. Instead of relying on a sample count, we will therefore collect
data for a fixed period of one semester.
The minimum job threshold will be calculated using the following formula:

Threshold = Total number of Jobs / Number of Students
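Assuming per-student job and error counts are available (the data structure below is hypothetical), the threshold and the subsequent filtering might look like:

```python
def job_threshold(total_jobs, num_students):
    """Minimum job count a student needs to be included in the study."""
    return total_jobs / num_students

def exemplar_students(jobs_per_student, total_jobs):
    """Keep students at or above the threshold, excluding anyone with a
    perfect (error-free) record, as they would show no change.
    jobs_per_student maps student -> (job_count, error_count)."""
    threshold = job_threshold(total_jobs, len(jobs_per_student))
    return [
        student
        for student, (job_count, error_count) in jobs_per_student.items()
        if job_count >= threshold and error_count > 0
    ]
```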
Using this threshold, the list of render jobs will be filtered to yield a list of
exemplar students; this was preferred to random sampling, which might have
selected users who had only used the farm on a few occasions. The final level of
filtering will remove any users who have achieved a perfect record, as they will
not show any change. Once this filtering is complete, the script displayed in
Appendix A will code the job errors in line with Table 1.
Error Detail            Code
Project not Set         1
Missing Texture Files   2
No Spaces               3
Project not on R:       4
Other Errors            5

Table 1: Error Code Key
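The classification applied by the Appendix A script amounts to a lookup from log-message signatures to the codes in Table 1; a condensed sketch:

```python
# Log-message signatures mapped to the error codes of Table 1. Code 1
# ("Project not Set") has no signature here because that error was
# eliminated by a separate project-setting script, as noted in the Findings.
SIGNATURES = [
    ("[texturesys] could not read file", 2),
    ("Warning: Texture file", 2),
    ("More than one file name is not allowed", 3),
    ("Cannot load scene", 4),
]

def classify(log_text):
    """Return the Table 1 code for one log: 0 for success, 5 for other errors."""
    if "process complete, exit code: 0" in log_text:
        return 0
    for signature, code in SIGNATURES:
        if signature in log_text:
            return code
    return 5
```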
Once coded, the data will be anonymised, ensuring student identity is not
revealed; however, to generate graphs indicating any change, it will be necessary
to assign each participant a serial number so that the data remains related.
Graphing this data will allow analysis of trends in the final stages of this
study. In addition to the per-student graphs, a summary graph of job error types
and their prevalence before and after the introduction of this tool will be
charted across the cohort.
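The anonymisation step amounts to a stable mapping from student identity to serial number; a minimal sketch (the 1001-onwards numbering mirrors the labels used in Appendix B):

```python
def anonymise(records):
    """Replace student identities with serial numbers (1001, 1002, ...),
    keeping all of one student's jobs under the same serial so the data
    remains related."""
    serials = {}
    anonymised = []
    for student, job in records:
        if student not in serials:
            serials[student] = 1001 + len(serials)
        anonymised.append((serials[student], job))
    return anonymised
```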
FINDINGS:
Graphing the user-based data (Appendix B) failed to yield any significant trends
for the errors being tracked in this study, instead indicating only a rising trend
of unidentified errors. To enable this study to progress, a cohort-level analysis
was produced in its place.
[Figure 3 plots cumulative error counts (0–120) against job ID (approximately
11–4961) for four error classifications: Project Not Set, Missing Texture Files,
Spaces Error, and Project not on R:.]
Figure 3: Cumulative error counts by type
Figure 3 indicates the cumulative count of errors by classification. Upon initial
inspection there are three characteristics (Blaikie, 2003): plateaus, sharp rises,
and a lack of correlation between error types.
Plateaus in Fig. 3 represent periods where no further errors of that
classification were created; these can be interpreted as evidence that the
students have learnt to avoid the error. The “missing texture files” line,
represented in green, is consistently flat until JID 4537, where it begins
increasing. This suggests that the “traffic light” visual indicators were
successful until that job, at which point they suddenly appeared to fail. Given
such a long period of error-free submissions for this factor, further inspection
of the log files was needed to ascertain the cause. Upon further inspection, the
result proved to be a rogue data point, caused by an old version of the tool being
installed on a single machine. Once this rogue result is excluded, we can safely
consider that the visual traffic-light indicators successfully encouraged students
to resolve this issue before submitting their files.
The sharp rises show a large block of jobs suffering the same error; at first this
may seem a curious artefact, but on investigation it was found to be caused by a
single student submitting a series of flawed jobs in a batch. Once placed in the
render queue, jobs are not processed until they reach the front of the queue, and
so the error does not become apparent until the student checks back later. This is
an example of where receiving feedback before submitting the job would have
improved the student's experience by resolving the issue before waiting in the
queue.
The “project not set” error failed to yield any values due to the introduction of
a separate script, outside the scope of this study, which forced students to set
the Maya project location before saving their files. In addition to these
measurable results, anecdotal evidence provided by students suggested that they
found the visual indicators most helpful in developing their scene files.
DISCUSSION:
After a period of adjustment, the true impact of the tool can be seen: after JID
3000, the majority of the errors addressed by this tool had plateaued. When
comparing the ratio of successful jobs to errored jobs with the same ratio from
previous years, the success of the tool was apparent, with many errors identified
and resolved before they caused issues.
Although the tool improved students’ error rates, the result was not consistent
across all users; some failed to demonstrate any benefit from the feedback. If the
tool were not conceived as a combined educational and pipeline tool, it might be
considered to have failed in respect of protecting the render farm from error.
However, in a learning context, it is understandable that not all students learn
in the same way, and so a system that works for some will not work for all (Fry et
al., 2015).
Through the process of developing this tool and the subsequent review of its
usage, the question of automatic versus manual correction arose. Much as in the
discussion surrounding automated grammar and spell checking within word processors
(Ferraro, Fichten and Barile, 2009), the question of whether we should
automatically correct student files needs to be addressed. To resolve this
question, I will now analyse the two available approaches.
The first approach would be to focus on the efficiency of the render farm,
automating the correction of student files with the objective of getting as many
files through as possible. This would also most likely produce a positive student
response, as the tool would be easier to use and therefore offer a better user
experience. However, without having to correct the errors themselves, students
would no longer be encouraged to cement the professional characteristics
identified in the literature review; instead, the tool could promote sloppy
behaviour, as it would fix all errors automatically. The final outcome would be
students becoming less work-ready; on reaching employment, where they would not
have access to this tool, graduates would fail to generate high-quality files
successfully.
An alternative approach would be to focus solely on teaching students what they
have done wrong and directing them towards a self-discovered solution. While this
may achieve the goal of creating work-ready graduates, it may not be the most
efficient way of processing files, leading to a less positive user experience.
A recurring question within teaching practice is how we, as teachers, get students
both to read and apply feedback (Duncan, 2007). The tool developed for this paper
suffers from the same issue: although it can accurately distribute individual
feedback, it is unable to force students to apply it.
The considered solution was to alter the tool to prevent jobs from being submitted
if they fail the tests, rather than leaving students with the option of ignoring
the feedback. This may be a feasible technical solution to the problem; however,
constructivism suggests that organisms learn when faced with failure, without
which learning would not be possible (Von Glasersfeld, 2002). In this case,
stopping students from submitting files that will fail is not necessarily the best
answer from a learning perspective, again raising the question of the balance
between the technical and pedagogic drivers for developing this tool.
As described in the methodology, the final stage of the action methodology is
refining the solution. To encourage students to apply the feedback given,
improving visibility could be a simple initial step: taking on board the success
of using bright colours to indicate the success or failure of a test, we could
replace the tick boxes with coloured indicators. In addition, the introduction of
a challenge dialogue, stating that errors were found and requesting the student to
confirm that they wish to ignore the feedback, could encourage students to think
twice as well as eliminating the chance that they had missed the errors.
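Such a challenge dialogue could gate submission as sketched below (an illustration only: the `confirm` callback stands in for whatever UI prompt the tool would actually present):

```python
def submit_job(check_results, confirm):
    """Submit only when all checks pass, or when the student explicitly
    confirms they wish to ignore the reported errors. `confirm` is a
    callable taking the challenge message and returning True/False."""
    failures = [name for name, passed in check_results.items() if not passed]
    if not failures:
        return "submitted"
    message = "Errors found: " + ", ".join(failures) + ". Submit anyway?"
    return "submitted" if confirm(message) else "cancelled"
```

Keeping the override available, rather than blocking submission outright, preserves the opportunity to learn from failure discussed above.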
CONCLUSION:
The use of computer-generated feedback has provided measurable learning when used
to address specific outcomes, so long as those outcomes are measurable with a
binary response. Although not addressed within this paper, the use of non-binary
feedback would have posed significant technological challenges, as well as a
problem for the quantitative methodology used. To improve the study, an additional
measurement point could be introduced when the tool is loaded, enabling students’
files to be analysed both before and after submission; this would also have
captured files that could not be submitted to the render farm.
In addition to the technical solutions suggested, it has become apparent that this
tool is not the solution to all problems; instead, it is part of a larger
solution. Therefore, the introduction of additional in-class training will further
target how students should test files before sending them for processing on the
render farm.
In conclusion, the application of pedagogic theory within the design of animation
pipeline tools has been successful, improving both the students’ learning of
animation techniques and the overall usefulness of the tool. To increase the
impact of this study, it is hoped that part of this tool can be made available as
open source for use by both students and industry practitioners alike.
REFERENCES:
Anderson, T. (2008) The theory and practice of online learning. 2nd edn.
Edmonton: AU Press.
Blaikie, N. (2003) Analyzing quantitative data. London: Sage.
Cantor, J. and Valencia, P. (2004) Inspired 3D short film production. Boston,
Mass.: Thomson Course Technology.
Chickering, A.W. and Gamson, Z.F. (1987) 'Seven Principles for Good Practice in
Undergraduate Education', AAHE Bulletin, pp. 3-7.
Clark, I. (2012) 'Formative Assessment: Assessment Is for Self-regulated
Learning', Educational Psychology Review, 24(2), pp. 205-249. doi:
10.1007/s10648-011-9191-6.
Comninos, P., McLoughlin, L. and Anderson, E.F. (2010) 'Educating technophile
artists and artophile technologists: A successful experiment in higher
education', Computers & Graphics, 34(6), pp. 780-790. doi:
10.1016/j.cag.2010.08.008.
Creative Skillset (2015) The value of the Creative Skillset Tick.
Duncan, N. (2007) ''Feed-forward': improving students' use of tutors' comments',
Assessment & Evaluation in Higher Education, 32(3), pp. 271-283. doi:
10.1080/02602930600896498.
Ferraro, V., Fichten, C.S. and Barile, M. (2009) Computer use by students with
disabilities: perceived advantages, problems and solutions. p. 20.
Ferster, C.B. and Skinner, B.F. (1957) Schedules of reinforcement. New York, NY:
Appleton-Century-Crofts.
Fry, H., Ketteridge, S. and Marshall, S. (2015) A Handbook for Teaching and
Learning in Higher Education: Enhancing academic practice. 4th edn. Hoboken:
Taylor and Francis.
Hager, P. and Holland, S. (2006) Graduate Attributes, Learning and
Employability. Dordrecht: Springer.
Hamilton, M.L. (1995) 'Relevant Readings in Action Research', Action in Teacher
Education, 16(4), pp. 79-81. doi: 10.1080/01626620.1995.10463221.
Hunt, J.M. (2008) Competence and character: pedagogical considerations for
preparing students to be professionals. New York, NY: ACM, p. 103.
Jordan, J. (2014) Passive vs. Active: The Role of the Survey in a Big Data
World. Available at: https://blog.instant.ly/blog/2014/11/passive-vs-active-the-role-of-the-survey-in-a-big-data-world/
(Accessed: 2016).
OED Online (2016) render, n.1. Available at: http://www.oed.com/view/Entry/162385?redirectedFrom=render+farm
(Accessed: 2016).
Pritchard, A. (2014) Ways of learning. 3rd edn. London: Routledge.
Skinner, B.F. (1938) The behavior of organisms: an experimental analysis.
University of Hertfordshire (2014) Graduate Attributes. Available at:
http://www.herts.ac.uk/about-us/student-charter/graduate-attributes (Accessed:
2016).
Von Glasersfeld, E. (2002) 'Learning and adaptation in the theory of
constructivism', in Smith, L. (ed.) Critical readings on Piaget, pp. 20-27.
Vygotskij, L.S., Hanfmann, E., Vakar, G. and Kozulin, A. (2012) Thought and
language. Rev. and expanded edn. Cambridge, Mass.: MIT Press.
Appendix A: Categorisation Code
This appendix contains the code used to categorise the error data collected from the farm log
files.
Language: PYTHON
import os

for root, dirs, files in os.walk("/FARMLOGSTORE/tractor-logs"):
    prevJobID = ''
    for file in files:
        # get the JobID from the directory name
        jobID = os.path.abspath(root).split(os.path.sep)[-1]
        # only test one task per job
        if jobID != prevJobID:
            if file.endswith(".log"):
                fileToTest = open(os.path.join(root, file)).read()
                if "process complete, exit code: 0" in fileToTest:
                    print(jobID + ',' + os.path.join(root, file) + ',0')
                elif ("[texturesys] could not read file" in fileToTest
                        or "Warning: Texture file" in fileToTest):
                    print(jobID + ',' + os.path.join(root, file) + ',2')
                elif "More than one file name is not allowed" in fileToTest:
                    print(jobID + ',' + os.path.join(root, file) + ',3')
                elif "Cannot load scene" in fileToTest:
                    print(jobID + ',' + os.path.join(root, file) + ',4')
                else:
                    print(jobID + ',' + os.path.join(root, file) + ',5')
                prevJobID = jobID
Appendix B: Individual User Error Report Graphs
[Appendix B comprised ten per-student charts, Student # 1001 through Student #
1010, each plotting counts of error return codes 2, 3, 4 and 5 against job ID
(JID).]
Word Count: 3563