
Designing a System for Supporting the Process of

Making a Video Sequence


Shigeki Amitani
Creativity & Cognition Studios
Australasian CRC for Interaction Design
University of Technology, Sydney, AUSTRALIA
+61-(0)2-9514-4631
shigeki@shigekifactory.com

Ernest Edmonds
Creativity & Cognition Studios
Australasian CRC for Interaction Design
University of Technology, Sydney, AUSTRALIA
+61-(0)2-9514-4640
ernest@ernestedmonds.com

ABSTRACT
The aim of this research is to develop a system to support video artists. Design rationales for software for artists should be obtained by investigating artists' practice. In this study, we have analysed the process of making a video sequence in collaboration with an experienced video artist. Based on this analysis we identified design rationales for a system that supports the process of making a video sequence. A prototype system, "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)", has been developed. Further development towards a generative system is also discussed.

Categories and Subject Descriptors
H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

General Terms
Design

Keywords
Video making, cognitive process, sketching, software, time-based information, generative system

1. INTRODUCTION
Artists have used information technologies since computers became available (e.g. [14]). These tools help artists break new ground from an artistic perspective. However, the tools are not optimally designed for artists. This paper presents:

• Results of an investigation of the process of making a video sequence, conducted in collaboration with a professional video artist, to identify design rationales for a supporting system
• Development of a prototype system called "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)" that supports this process, based on the investigation
• Plans for extending the KNC4TI system into a generative system

2. RELATED WORK
In composing a video sequence, an editing tool is indispensable. Traditionally, video editing equipment has been designed for industrial video production, whose needs differ from those of artists. While industrial video production needs tools for organising a video sequence along with a storyboard devised in advance, artistic video production tends to proceed through interactions between an artist and a work rather than by following a pre-defined storyboard.

Even so, artists have adopted that equipment so that they can present their art works. Recently, artists as well as industrial video producers have started to use computer software for their compositions. However, most video editing software has been developed as a metaphor of traditional editing equipment such as film and VCRs, much as general GUI operating systems adopted the desktop metaphor. This means that video editing software still does not provide suitable interactive representations for artists, even though the editing process of industrial video producers differs from that of artists.

In order to understand design processes in detail, a number of analyses, especially of architects' design processes, have been conducted [3-5, 22]; however, most of these studies focused on the design of non-time-based information. Few analyses have been conducted on time-based information, such as making a video sequence or composing music.

Tanaka [23] has pointed out a problem with the analyses conducted so far in musical composition research: although generic models of the musical composition process have been proposed based on analyses of that process (macroscopic models and analyses), little has been done to investigate how each stage in those models proceeds and how transitions between stages occur (microscopic models and analyses). Amitani et al. [1] have conducted a microscopic analysis of the process of musical composition. However, few microscopic analyses have been conducted on the process of making a video sequence.

From the viewpoint of human-computer interaction research, Shipman et al. [17] have developed a system called "Hyper-Hitchcock". This system has the flexibility required for video editing, but it aims to index video clips based on a detail-on-demand concept that facilitates efficient navigation of video sequences.

Yamamoto et al. [24] have developed ARTWare, a library of components for building domain-oriented multimedia authoring environments. A system was developed particularly for the empirical video analysis used in usability studies.

Although these systems have been developed from a design perspective, their focus is on supporting navigation processes and the analysis of video content. For authoring information artefacts, it is important to support the entire process, from the early stages where ideas are not yet clear to the final stages where a concrete work is produced.
Shibata and Hori [15, 16] have claimed the importance of an integrated environment that supports the entire process of creative activities, as the process is composed of inseparable sub-processes (e.g. the generating process and the exploring process [8]). In our study, we also regard this concept of integration as important and implement the system so as to realise it.

3. A CASE STUDY
We have investigated the process of making a video sequence to identify design rationales for the development of a video authoring tool that fits designers' cognitive processes. It was collaborative work with an experienced video artist (we call the artist the "participant" in this paper). As the participant already had a plan to compose a video clip, we could observe a quasi-natural process of making a video sequence.

Retrospective reports for protocol analysis [7], questionnaires and interviews were analysed. The overall tendencies are summarised below:

• Conceptual work, such as considering the whole structure of a piece and the semantic segmentation of a material movie, is conducted in the participant's sketchbook
• Software is mainly used for:
  o Observing what is really going on in a material video sequence
  o Implementing the results of his thoughts in his sketch in response to what is seen on the software

The analysis shows that conceptual design processes are separated from implementation processes, even though the two cannot be separated from each other. The design process is regarded as a "dialogue" between the designer and his/her material [13]. Facilitating designers in going back and forth between the whole and a part, and between the conceptual and the represented world, will support this design process.

3.1 Roles of Sketching
Sketching plays significant roles that existing software does not cover. Sketching allows designers to:

• Externalise the designer's multiple viewpoints simultaneously with written and diagrammatic annotations
• Visualise relationships between the viewpoints that the designer defines

In the following sections we discuss how these two features work in the process of making a video sequence.

3.1.1 Written and diagrammatic annotations for designers' multiple viewpoints
Figure 1 shows the participant's sketch. Each of the six horizontal rectangles in Figure 1 represents the entire material video. They all refer to one and the same video sequence under different labels, so that the participant can plan what should be done with regard to each element he decided to label. From top to bottom they are labelled as follows (shown in (1) of Figure 2):

Movements: movements of physical objects, such as a person coming in, a door opening, etc.
Sound levels: changes of sound volume
Sound image: types of sounds (e.g. "voices", etc.)
Pic (= picture) level: changes in the density of the image
Pic image: types of images
Compounded movements: plans

Figure 1 The participant's sketch


These elements are visualised in the sketch based on timeline conventions. Although some existing video authoring tools present the sound level on a timeline, as the second rectangle shows, existing software allows only limited written annotations on a video sequence and consequently does not provide sufficient functionality for externalising multiple viewpoints. In particular, the top sequence, labelled "movement", is conceptually important in making a video sequence and is not supported by any video authoring tool. As (1) in Figure 2 shows, a mixture of written and diagrammatic annotations works for analysing what is going on in the material sequence.

Figure 2: Annotations ((1) the designer's own annotations; (2) the same object with different annotations)

3.1.2 Visualising relationships between multiple viewpoints
As shown in (2) of Figure 2, a certain part of the material sequence is annotated in a different way in each rectangle in order to describe the conditions represented by that rectangle, namely: speak; zero (with shading); null; black; T (or a T-shaped symbol); meter. This is the power of multiple viewpoints with written annotations.

These annotations explain a certain part of the video sequence in terms of each corresponding viewpoint. For example, in terms of "Sound levels", the sketch shows that the sound level will be set to zero at this point of the sequence.

The participant also externalises the relationships across the viewpoints in his sketch by using both written and diagrammatic annotations, as shown in Figure 3.

Sketching supports designers in thinking about semantic relationships, such as "voices leads pics" shown in (1) of Figure 3, as well as relationships among physical features, such as the timing between sounds and pictures.

(2) indicates that he visualised the relationships between picture images and his plan for a certain part of the material sequence by using written and diagrammatic annotations.

(3) shows that he was thinking about the relationships across the viewpoints.

The relationships that the participant visualised are both physical and semantic. Some authoring tools support the visualisation of physical relationships; however, they have few functions to support semantic relationships among the designer's viewpoints. Sketching assists this process.

Figure 3: Relationships between multiple viewpoints ((1) relationships between sound and vision; (2) relationships between vision and plan; (3) relationships across the viewpoints)

Sketching also provides a holistic view of time-based information. Implementing these features of sketching in software will facilitate designers in going back and forth between the conceptual and the physical world, and between the whole and a part, so that the process of making a video sequence is supported.

3.2 Roles of Software
We investigated the process of making a video sequence with software. The participant was to edit a material video sequence composed of a single shot. The editing tool he used was FinalCut Pro HD, which he had been using for about five years. The duration of the session was up to the participant (eventually it was 90 minutes). The video editing was conducted at a studio at the Creativity & Cognition Studios, University of Technology, Sydney. It was the first time he had engaged with the piece; that is, the process was the earliest stage of using video-authoring software for the new piece.

The process of making a video sequence was recorded by digital video cameras. The following elements were recorded:

• The participant's physical actions while making a video sequence with the video editing software
• The participant's actions on the computer displays

After authoring a video sequence, the participant was asked to give a retrospective report on his authoring process while watching the recorded video data. We adopted the retrospective report method so that we could capture the cognitive processes in the actual interactions as far as possible. The recorded video data was used as a visual aid to minimise the demands on the participant's memory [22]. The participant was asked to report what he had thought during editing while watching the recorded video data. Following this, the participant was asked to answer a free-form questionnaire via e-mail.

3.2.1 Observing "facts" in a material sequence
The participant reported that he was just looking at the film clip, as follows:

[00:01:30] At this stage, I'm just looking again at the film clip.

[00:02:32] So again, still operating on this kind of looking at the image over, the perceptual thing.
This was reported 18 times in his protocol data. These observations occurred in the early and late phases of the process, as shown in Figure 4.

Figure 4 The time distribution of the observation process (frequency of observation plotted against time over the 90-minute session)

The observation of facts took 75 minutes and the exploration of possibilities 5 minutes; the rest was spent on other events, such as reading a manual to solve technical problems or talking to a person.

In this observation process, it was also observed that the participant was trying to find a "rhythm" in the material sequence, which he calls "metre":

[00:02:23] One of the things I've been thinking about ... is actually to, is actually well, what is the kind of metre, what is the rhythm that you are going to introduce into here

This type of observation is for checking the actual duration of each scene that the participant considered to be "a semantic chunk". The participant recorded the precise time durations of the semantic chunks and listed them in his sketchbook. This means that the participant was trying to refine his idea by mapping conceptual elements onto a physical feature.

[00:08:32] It's a matter of analysing each, almost each frame to see what's going on and making a decision of. Having kind of analyse what's going on and making a decision of, well therefore this duration works out of something like this. The durations are in seconds and frames, so that [...] 20 unit [...]. It counts from 1 to 24 frames, 25th frame rolls over number second.

In the process of making a video sequence, the software plays the role of elaborating what the participant decided roughly in his sketch. This process has the following features:

• Transitions from the macroscopic viewpoints that appeared in his sketch to microscopic actions, such as focusing on time durations, were frequently observed
• Almost no transition in the opposite direction was observed, such as seeing the effect of microscopic changes on the entire concept

3.2.2 Trial-and-error processes
Video authoring software supports trial-and-error processes with "the shot list" as well as the "undo" function. Existing video editing software usually allows designers to list the files and sequences used in their current video compositions (this list is called a shot list).

The list function in a video editing tool supports the comparison of multiple alternatives. It allows a designer to list not only files that may potentially be used but also created sequences. In the retrospective report, the participant said:

[00:11:10] It would be a kind of parallel process where you make a shot list is causing what they call a shot list [pointing at the left most list-type window in FinalCut]. And essentially you go through on the list, the different shots, the different scenes as we would often call, um. Whereas I'm just working with one scene, dynamics within one scene. So, I'm working with a different kind of material, but it's related too.

Although this function helps designers conduct trial-and-error processes by comparing multiple possibilities, the participant mentioned a problem:

[A-4] Film dubbing interface metaphor [is inconvenient]. The assumption is that a TV program or a cinema film is being made, which forces the adoption of the system to other modes. For instance, why should there be only one Timeline screen? There are many instances where the moving image is presented across many screens.

Existing video editing software has adopted a metaphor of the tools used in the industrial film-making process. As a result, the software presents only the time axis of the sequence currently being composed. This problem was also reported in the context of musical composition [1].

3.3 Identified Design Rationales
Three design rationales have been identified based on our analysis:

• Allowing seamless transition between a conceptual holistic viewpoint (overview) and a partial implementation of the concepts (detail)
• Visualising multiple viewpoints and timelines
• Enhancing trial-and-error processes

These three points are not mutually exclusive; we separate them in order to make it easier to implement a system based on the knowledge obtained through this study, and to contribute to a more generic design theory for creativity support tools.

3.3.1 Allowing Seamless Transition between Overview and Detail Representations
The process of making a video sequence, especially in an artistic context, is a design process with a hermeneutical character: the whole defines the meaning of a part and, at the same time, the meanings of the parts decide the meaning of the whole [20]. A video authoring tool should therefore be designed to support this transition between the whole and a part.

Although the overview + detail concept is a generic design rationale applicable to various kinds of design problems, we consider it a particularly important strategy for the process of creating time-based information, because time-based information by its nature takes a form of which it is difficult to gain an overview.
For example, in order to see the effect on the whole caused by a partial change, you have to watch and/or listen to the sequence through from beginning to end, or you have to memorise the whole and imagine what impact the partial change has on it. In architectural design, the effects of partial changes are immediately visualised on the sketch, which makes it easy for designers to move between the whole and a part. This transition should be supported in the process of making a video sequence.

The participant first carried out a conceptual design in the sketching process by overviewing the whole, using written and diagrammatic annotations to articulate relationships among the annotated elements. He then proceeded to the detailed implementation of the video sequence in the software, which was a one-way process. As the conceptual design of the whole is inseparable from the detailed implementation in the software, they should be seamlessly connected.

The reason why this one-way transition occurs may be partly that this was the early stage of the process of making a video sequence. However, we consider that it is because the tools for conceptual design (the sketch) and for implementation (the software) are completely separated, with the result that a designer does not modify a sketch once it is completed. This phenomenon was also observed in the study of the musical composition process [1], where comparison between multiple possibilities occurred when an overview was provided through the traditional score-metaphor interface. It is expected that providing an overview supports comparisons between multiple possibilities derived from partial modifications.

3.3.2 Visualising Multiple Viewpoints and Timelines
It was observed that the participant visualises multiple viewpoints and timelines; however, existing software presents only one timeline.

Amitani et al. [1] have claimed, based on their experiment, that a musical composition process does not always proceed along the timeline of the musical piece, and they stressed the importance of presenting multiple timelines in musical composition. Some musicians do compose a musical piece along its timeline; however, we consider that tools should be designed to support both cases. This is applicable to the process of making a video sequence.

3.3.3 Enhancing Trial-and-Error Processes
As mentioned before, a shot list helps designers understand relationships among sequences. In the questionnaire, the participant described how he uses the list:

[A-5] Selecting short segments into a sequence on the Timeline, to begin testing noted possibilities with actual practice and their outcomes.

Although the shot list helps designers to some degree, the list representation only allows designers to sort the listed materials along one axis, such as alphabetical order. This is useful; however, designers cannot arrange the materials in their own semantic ways. This makes it difficult for designers to grasp the relationships between files and sequences, so the list representation prevents designers from fully exploring multiple possibilities.

Instead of the list representation, a spatial representation is more suitable for this kind of information-intensive task [12]. While this comparison has so far been conducted in the designer's mind, externalising the designer's mental space is helpful for deciding whether an information piece will be used or not. Shoji and Hori [19] have investigated the differences between a list representation and a spatial representation and found that a spatial representation contributes to elaborating concepts better than a list representation. We believe that spatial representations will help a designer compare multiple possibilities.

In the next section, a prototype system for supporting the process of making a video sequence with spatial representations is presented.

4. KNOWLEDGE NEBULA CRYSTALLIZER FOR TIME-BASED INFORMATION
The "Knowledge Nebula Crystallizer (KNC)" was originally suggested by Hori et al. [10] as a prototype knowledge management system that has a repository called a "knowledge nebula". The knowledge nebula is an unstructured collection of small information pieces. The essential operations of the KNC system are crystallization and liquidization. During crystallization, information pieces from the nebula are selected and structured according to a particular context, resulting in a new information artefact. During liquidization, an information artefact is segmented into elements that are added to the knowledge nebula.
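As a rough illustration of these two operations, the following minimal Java sketch models a knowledge nebula as a plain collection of small information pieces. The class and method names, and the keyword-matching selection, are our own assumptions for illustration; they are not the actual KNC implementation described by Hori et al. [10].

import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch only: names and signatures are assumptions, not the actual KNC classes. */
class KnowledgeNebulaSketch {
    /** The knowledge nebula: an unstructured collection of small information pieces. */
    private final List<String> nebula = new ArrayList<String>();

    void add(String piece) {
        nebula.add(piece);
    }

    /**
     * Crystallization: select pieces that match a particular context and
     * structure them (here, simply in selection order) into a new artefact.
     */
    List<String> crystallize(String context) {
        List<String> artefact = new ArrayList<String>();
        for (String piece : nebula) {
            if (piece.contains(context)) {   // stand-in for a real relevance test
                artefact.add(piece);
            }
        }
        return artefact;
    }

    /** Liquidization: segment an artefact into elements and return them to the nebula. */
    void liquidize(List<String> artefact) {
        for (String element : artefact) {
            nebula.add(element);
        }
    }
}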
The Knowledge Nebula Crystallizer for Time-based Information has been developed with Java 1.4.2 on the Mac OS X platform. Figure 5 shows a snapshot of the KNC4TI system.

Figure 5 A Snapshot of the KNC4TI System (the OverviewEditor with grouping, the ElementViewer, and the DetailEditor opened by double-clicking a group)

The interface part of the KNC4TI system is composed of: (1) the OverviewEditor; (2) the DetailEditor; (3) the ElementViewer; and (4) the ElementEditor. For practical reasons, we have adopted FinalCut Pro HD as the ElementEditor; the reason is described in this section.

4.1 OverviewEditor
The OverviewEditor provides, as its name says, an overview of the movie objects available at hand. They are added either by choosing a folder that contains movie objects to be potentially used or by dragging and dropping movie files into the OverviewEditor. Figure 6 shows a snapshot of the OverviewEditor.

Each object carries thumbnails, in addition to its file name, so that a designer can see what the movie is about. When a movie object is double-clicked, the ElementViewer pops up and the corresponding movie file is played so that the designer can check its contents (right in Figure 5). The ElementViewer is a simple QuickTime-based viewer that plays a selected movie object on demand.

Figure 6 OverviewEditor (movie objects, the player, and a comment by the user)

The shot list in FinalCut Pro HD is a component similar to the OverviewEditor in the sense that the available movie objects are listed in it; however, the following interactions are expected advantages of adopting a spatial representation:

Rearranging the positions of the objects: While a list representation provides designers with a mechanically sorted file list, a two-dimensional space allows designers to arrange movie objects according to their own viewpoints. For example, movie files that might be used in a certain video work can be arranged close together so that the designer can incrementally formalise his or her ideas about the video piece [18].

Annotations: An annotation box appears by drag & drop in a blank space in the OverviewEditor. A designer can put annotations in it and can freely arrange it wherever he or she wants on the OverviewEditor. This is an enhancement of the written annotation function.

Grouping: A designer can explicitly group movie objects on the OverviewEditor. Grouped movie objects are moved as a group. Objects can be added to and removed from a group at any time by drag & drop.

Copy & Paste: A movie object does not always belong to only one group while a designer is exploring which combinations work for a certain video piece. To facilitate this process, a copy & paste function was implemented. Whereas only one possibility can be explored in the timeline representation and shot list of normal video editing software, this visually allows a designer to examine multiple possibilities.
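The interactions above suggest a simple spatial data model: every movie object and annotation box has a free two-dimensional position, and a movie object may be copied into more than one group. The sketch below illustrates such a model; all class and field names are our own illustrative assumptions, not the actual KNC4TI classes.

import java.awt.Point;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Illustrative sketch of an OverviewEditor-style spatial model; names are assumptions. */
class OverviewModelSketch {
    static class MovieObject {
        String fileName;   // e.g. "Hatayoku.mpg"
        Point position;    // freely rearrangeable 2D position
        MovieObject(String fileName, Point position) {
            this.fileName = fileName;
            this.position = position;
        }
    }

    static class AnnotationBox {
        String text;       // the designer's written annotation
        Point position;    // placed anywhere on the editor
        AnnotationBox(String text, Point position) {
            this.text = text;
            this.position = position;
        }
    }

    /** A group moves as one unit; an object may be copied into several groups. */
    static class Group {
        Set<MovieObject> members = new HashSet<MovieObject>();
        void moveBy(int dx, int dy) {
            for (MovieObject m : members) {
                m.position.translate(dx, dy);
            }
        }
    }

    List<MovieObject> objects = new ArrayList<MovieObject>();
    List<AnnotationBox> annotations = new ArrayList<AnnotationBox>();
    List<Group> groups = new ArrayList<Group>();
}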
4.2 DetailEditor
The DetailEditor appears when a group on the OverviewEditor is double-clicked. The DetailEditor shows only the objects in the clicked group, as shown in Figure 5. Figure 7 shows a snapshot of the DetailEditor.

Figure 7 DetailEditor (the grouped objects arranged by the user)

In the DetailEditor, the horizontal axis is a timeline and the vertical axis is similar to tracks. It plays the grouped movies from left to right. If two objects overlap horizontally, as Figure 7 shows, the first movie (Hatayoku.mpg) is played first and then, part-way through that movie, the next one (Impulse.mpg) is played. The timing of the switch from the first movie to the second is defined by the following rule:

Movie 1 has a time duration d1, is represented as a rectangle of width l1 pixels, and is located at x = x1. Movie 2 has a duration d2, is represented as a rectangle of width l2 pixels, and is located at x = x2 (Figure 8). When the play button is pushed, movie 1 is played in the ElementViewer and, after time t1, the second movie is played. The playing duration t1 is defined by equation (1) in Figure 8:

t1 = d1 (x2 - x1) / l1    (1)

Figure 8 The Timing Rule for Playing Overlaps (Object 1: duration d1, width l1, at x1; Object 2: duration d2, width l2, at x2, along the x axis)

Following this rule, the movie objects grouped in the DetailEditor are played from left to right in the ElementViewer. This allows designers to quickly check what a certain transition from one file to another looks like.

Designers can open as many DetailEditors as they wish, so that they can compare and explore multiple possibilities.
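Equation (1) simply maps the pixel offset between the two rectangles onto the first movie's time axis (d1/l1 seconds per pixel). The Java sketch below shows the computation; the class and field names are hypothetical, and only the formula t1 = d1(x2 - x1)/l1 comes from the rule above.

/**
 * Illustrative sketch of the DetailEditor timing rule (equation 1).
 * Class and field names are hypothetical; only the formula
 * t1 = d1 * (x2 - x1) / l1 is taken from the paper.
 */
class MovieRect {
    final String file;         // e.g. "Hatayoku.mpg"
    final double xPos;         // x: left edge of the rectangle in pixels
    final double widthPx;      // l: width of the rectangle in pixels
    final double durationSec;  // d: duration of the movie in seconds

    MovieRect(String file, double xPos, double widthPx, double durationSec) {
        this.file = file;
        this.xPos = xPos;
        this.widthPx = widthPx;
        this.durationSec = durationSec;
    }
}

class TimingRule {
    /** Time (in seconds of movie 1) at which playback switches to movie 2. */
    static double switchTime(MovieRect m1, MovieRect m2) {
        // Pixel offset of movie 2's left edge, scaled by movie 1's seconds-per-pixel.
        return m1.durationSec * (m2.xPos - m1.xPos) / m1.widthPx;
    }

    public static void main(String[] args) {
        MovieRect first = new MovieRect("Hatayoku.mpg", 40, 200, 30.0);
        MovieRect second = new MovieRect("Impulse.mpg", 140, 160, 20.0);
        // Switch half-way through the first movie: 30 * (140 - 40) / 200 = 15 s.
        System.out.println("t1 = " + TimingRule.switchTime(first, second) + " s");
    }
}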
4.3 ElementEditor: Seamless Connection with FinalCut Pro HD
Starting from the OverviewEditor, a designer narrows down his or her focus with the DetailEditor and the ElementViewer; the designer then needs to work on the video piece more precisely. For this purpose, we adopted FinalCut Pro as the ElementEditor, and the KNC4TI system is seamlessly connected with FinalCut Pro HD via XML.

FinalCut Pro HD provides import and export functions for .xml files of video sequence information. The DetailEditor likewise exports and imports .xml files formatted in the Final Cut Pro XML Interchange Format [2] when any point on a DetailEditor is double-clicked. An XML file exported by the DetailEditor is automatically fed to FinalCut Pro HD. Figure 9 shows the linkage between the DetailEditor and FinalCut.

Figure 9 Linkage between the DetailEditor and FinalCut through XML import/export (a double click on a DetailEditor triggers the export)
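As a rough illustration of this linkage, the sketch below writes a minimal sequence description to an .xml file using only the standard Java library. The element names are simplified placeholders of our own; an actual export has to follow the Final Cut Pro XML Interchange Format as specified by Apple [2].

import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.List;

/**
 * Rough illustration of exporting a DetailEditor arrangement as XML.
 * The element names below are simplified placeholders, NOT the actual
 * Final Cut Pro XML Interchange Format; see the Apple specification [2].
 */
class SequenceExportSketch {
    static void export(String path, String sequenceName, List<String> clipFiles)
            throws IOException {
        Writer out = new FileWriter(path);
        try {
            out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
            out.write("<sequence name=\"" + sequenceName + "\">\n");
            for (String clip : clipFiles) {
                // One entry per grouped movie object, in left-to-right playing order.
                out.write("  <clip file=\"" + clip + "\"/>\n");
            }
            out.write("</sequence>\n");
        } finally {
            out.close();
        }
    }
}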
Using FinalCut Pro is advantageous for the following reasons. First, it increases practicality. One of the most difficult things in applying a new system in practice is that practitioners are reluctant to change their tools, and FinalCut Pro is one of the most widely used video authoring tools. That is, the KNC4TI could potentially be used as an extension of an existing video authoring environment.

Second, it reduces the development load. It is not efficient to develop a system that beats, or even matches the quality of, a well-developed system such as FinalCut Pro. We are not dismissing the existing sophisticated systems, but extending what they can do for human designers.

5. TOWARDS A GENERATIVE VIDEO AUTHORING SYSTEM
Edmonds has suggested that a computer can certainly be a stimulant for human creative activities [6]. The important question is how we can design a computer system that supports people in increasing their capacity to take effective and creative actions. We are currently developing components that extend the current system into a generative system that stimulates designers' thinking.

Figure 10 shows the model of the generative system. First, information artefacts (existing ones and/or new pieces of information) are collected and stored (left in Figure 10). A system (top in Figure 10) generates possible information artefacts (right in Figure 10). These outputs work in two ways: (1) as final products that a user can enjoy; and (2) as draft materials that a user can modify (at the centre of Figure 10).

In order to deliver possible information artefacts to users, a component called the Dynamic Concept Base (DCB) is being developed. It is a concept base that holds multiple similarity definition matrices, which are dynamically reconfigured through interactions. The more the number of objects increases, the more difficult it becomes to grasp their relationships on a physically limited display. To assist a designer in gaining an overview of a movie file space, the way objects are arranged in a two-dimensional space is critical. Sugimoto et al. [21] have shown statistically that a similarity-based arrangement works better than a random arrangement for the comprehension of information presented in a two-dimensional space. That is, the DCB potentially has the ability to help a designer understand an information space. Movie objects are arranged based on the similarities computed by the DCB.

Figure 10 How a Generative System Works (information artefacts such as texts, videos and images feed a generative system; the possible information artefacts it generates, such as multimedia composites, web pages and documents, work as stimulants; through interaction with an artist, information designer or public/active audience they become final outcomes, and the output becomes an input for the next loop)

Similarities between movies are computed based on physical features such as brightness and hue. While the arrangement is produced by the system, it does not necessarily fit the designer's context [11]. The system should therefore allow end-user modification [9] for the incremental formalisation of information artefacts [18]. The DCB is reconfigured through interactions such as rearranging, grouping and annotating objects. If two objects are grouped together by a designer, then the DCB computes their similarity again (Figure 11), so that the similarity definition becomes more contextually suitable.

Figure 11 Reconfiguring the DCB through Interactions (grouping, annotating and user comments turn the original similarity matrix of the Dynamic Concept Base into a new matrix to be reconfigured in the restructured Dynamic Concept Base)
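A minimal sketch of this reconfiguration step is shown below. The update rule used here (pulling the grouped pair's similarity halfway towards 1.0) is purely our own assumption for illustration; the paper does not specify how the DCB recomputes similarities.

/**
 * Illustrative sketch of reconfiguring a similarity matrix when the designer
 * groups two objects. The update rule (averaging towards 1.0) is an assumption
 * made for illustration; the actual DCB recomputation is not specified here.
 */
class DynamicConceptBaseSketch {
    private final double[][] similarity;   // symmetric matrix, values in [0, 1]

    DynamicConceptBaseSketch(double[][] initial) {
        this.similarity = initial;
    }

    /** Called when the designer groups objects i and j on the OverviewEditor. */
    void onGrouped(int i, int j) {
        double updated = (similarity[i][j] + 1.0) / 2.0;  // pull the pair closer
        similarity[i][j] = updated;
        similarity[j][i] = updated;
    }

    double similarityBetween(int i, int j) {
        return similarity[i][j];
    }

    public static void main(String[] args) {
        double[][] m = {
            {1.0, 0.10, 0.26},
            {0.10, 1.0, 0.37},
            {0.26, 0.37, 1.0}
        };
        DynamicConceptBaseSketch dcb = new DynamicConceptBaseSketch(m);
        dcb.onGrouped(0, 1);                              // designer groups objects 0 and 1
        System.out.println(dcb.similarityBetween(0, 1));  // prints 0.55
    }
}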
6. CONCLUSION
In this paper, we presented: (1) an analysis of the process of making a video sequence to identify design requirements for a supporting system; (2) the system developed based on this analysis; and (3) plans for a generative system.

The design rationales for an appropriate video-authoring tool derived from our investigation are summarised as three inter-related features: (1) allowing seamless transition between a conceptual holistic viewpoint (overview) and a partial implementation of the concepts (detail); (2) visualising multiple viewpoints and timelines; and (3) enhancing trial-and-error processes. A prototype system, "Knowledge Nebula Crystallizer for Time-based Information (KNC4TI)", has been developed based on this analysis.

We are going to evaluate the system through user studies and also implement the generative function.

7. ACKNOWLEDGMENTS
This project was supported by the Japan Society for the Promotion of Science, and is supported by the Australasian CRC for Interaction Design and the Australian Centre for the Moving Image. The authors are also grateful to Dr. Linda Candy and Mr. Mike Leggett for their useful comments for improving our research.

8. REFERENCES
[1] Amitani, S. and Hori, K. Supporting Musical Composition by Externalizing the Composer's Mental Space. in Proceedings of Creativity & Cognition 4, Loughborough University, Loughborough, 13-16 October (2002), 165-172.
[2] Apple Inc. Final Cut Pro XML Interchange Format.
[3] Bilda, Z. and Gero, J. Analysis of a Blindfolded Architect's Design Session. 3rd International Conference on Visual and Spatial Reasoning in Design, 22-23 July 2004, MIT, Cambridge, USA, 2004.
[4] Cross, N., Christiaans, H. and Dorst, K. Analysing Design Activity. John Wiley & Sons, 1997.
[5] Eckert, C., Blackwell, A., Stacey, M. and Earl, C. Sketching Across Design Domains. 3rd International Conference on Visual and Spatial Reasoning in Design, MIT, Cambridge, USA, 2004.
[6] Edmonds, E. Artists augmented by agents (invited speech). in Proceedings of the 5th International Conference on Intelligent User Interfaces, New Orleans, Louisiana, United States (2000), 68-73.
[7] Ericsson, A. and Simon, H. Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge, MA, 1993.
[8] Finke, R.A., Ward, T.B. and Smith, S.M. Creative Cognition: Theory, Research, and Applications. A Bradford Book, The MIT Press, 1992.
[9] Fischer, G. and Girgensohn, A. End-user modifiability in design environments. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People, Seattle, Washington, United States (1990), 183-192.
[10] Hori, K., Nakakoji, K., Yamamoto, Y. and Ostwald, J. Organic Perspectives of Knowledge Management: Knowledge Evolution through a Cycle of Knowledge Liquidization and Crystallization. Journal of Universal Computer Science, 10 (3). 252-261. 2004.
[11] Kasahara, K., Matsuzawa, K., Ishikawa, T. and Kawaoka, T. Viewpoint-Based Measurement of Semantic Similarity between Words. Journal of Information Processing Society of Japan, 35 (3). 505-509. 1994.
[12] Marshall, C. and Shipman, F. Spatial Hypertext: Designing for Change. Communications of the ACM, 38 (8). 88-97. 1995.
[13] Schoen, D.A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York, 1983.
[14] Scrivener, S. and Edmonds, E. The computer as an aid to the investigation of art exploration. in Proceedings of Euro IFIP, Amsterdam, Netherlands (1979), North-Holland Publishing Company, 483-490.
[15] Shibata, H. and Hori, K. A Framework to Support Writing as Design. Journal of Information Processing Society of Japan, 44 (3). 1000-1012. 2003.
[16] Shibata, H. and Hori, K. Toward an integrated environment for writing. in Proceedings of the Workshop on Chance Discovery, European Conference on Artificial Intelligence (ECAI) (2004).
[17] Shipman, F., Girgensohn, A. and Wilcox, L. Hyper-Hitchcock: Towards the Easy Authoring of Interactive Video. Proceedings of INTERACT 2003. 33-40. 2003.
[18] Shipman, F.M. and McCall, R.J. Incremental formalization with the hyper-object substrate. ACM Transactions on Information Systems, 17 (2). 199-227. 1999.
[19] Shoji, H. and Hori, K. S-Conart: an interaction method that facilitates concept articulation in shopping online. AI & Society, Social Intelligence Design for Mediated Communication, 19 (1). 65-83. 2005.
[20] Snodgrass, A. and Coyne, R. Is Designing Hermeneutical? Architectural Theory Review, 1 (1). 65-97. 1997.
[21] Sugimoto, M., Hori, K. and Ohsuga, S. An Application of Concept Formation Support System to Design Problems and a Model of Concept Formation Process. Journal of Japanese Society for Artificial Intelligence, 8 (5). 39-46. 1993.
[22] Suwa, M., Purcell, T. and Gero, J. Macroscopic analysis of design processes based on a scheme for coding designers' cognitive actions. Design Studies, 19 (4). 455-483. 1998.
[23] Tanaka, Y. Musical Composition as a Creative Cognition Process (in Japanese). Report of Cultural Sciences Faculty of Tokyo Metropolitan University, 307 (41). 51-71. 2000.
[24] Yamamoto, Y., Nakakoji, K. and Aoki, A. Visual Interaction Design for Tools to Think with: Interactive Systems for Designing Linear Information. in Proceedings of the Working Conference on Advanced Visual Interfaces (AVI 2002), Trento, Italy (2002), ACM Press, 367-372.
