
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

Vol. 11 No. 2
December 2017

EDITURA UNIVERSITARĂ
Bucureşti
Foreword
Welcome to the Journal of Information Systems & Operations
Management (ISSN 1843-4711; IDB indexation: ProQuest, REPEC,
EBSCO, COPERNICUS). This journal is an open-access journal
published twice a year by the Romanian-American University.
The published articles focus on IT&C and belong to national and
international researchers and professors who want to share their
research results, exchange ideas and speak about their expertise, as
well as to Ph.D. students who want to improve their knowledge and
present their emerging doctoral research.
As a challenging and favorable medium for scientific discussion,
every issue of the journal contains articles dealing with current topics
in computer science, economics, management, IT&C, etc.
Furthermore, JISOM encourages cross-disciplinary research by
national and international researchers and welcomes contributions
which give a special “touch and flavor” to the mentioned fields. Each
article undergoes a double-blind review by an internationally and
nationally recognized pool of reviewers.
JISOM thanks all the authors who contributed to this journal by
submitting their work for publication, and also thanks all the
reviewers who spared their valuable time to review and evaluate
the manuscripts.
Last but not least, JISOM aims to be one of the distinguished
journals in the mentioned fields.
Looking forward to receiving your contributions,
Best Wishes
Virgil Chichernea, Ph.D.
Editor-in-Chief
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

GENERAL MANAGER
Professor Ovidiu Folcut

EDITOR IN CHIEF
Professor Virgil Chichernea

MANAGING EDITORS
Professor George Carutasu
Lecturer Gabriel Eugen Garais

EDITORIAL BOARD

Academician Gheorghe Păun Romanian Academy


Academician Mircea Stelian Petrescu Romanian Academy
Professor Eduard Radaceanu Romanian Technical Academy
Professor Pauline Cushman James Madison University, U.S.A.
Professor Ramon Mata-Toledo James Madison University, U.S.A.
Professor Allan Berg University of Dallas, U.S.A.
Professor Kent Zimmerman James Madison University, U.S.A.
Professor Traian Muntean Universite Aix–Marseille II, FRANCE
Associate Professor Susan Kruc James Madison University, U.S.A.
Associate Professor Mihaela Paun Louisiana Tech University, U.S.A.
Professor Cornelia Botezatu Romanian-American University
Professor Ion Ivan Academy of Economic Studies
Professor Radu Şerban Academy of Economic Studies
Professor Ion Smeureanu Academy of Economic Studies
Professor Floarea Năstase Academy of Economic Studies
Professor Sergiu Iliescu University “Politehnica” Bucharest
Professor Victor Patriciu National Technical Defence University
Professor Lucia Rusu University “Babes-Bolyai” Cluj Napoca
Associate Professor Sanda Micula University “Babes-Bolyai” Cluj Napoca
Associate Professor Ion Bucur University “Politehnica” Bucharest
Professor Costin Boiangiu University “Politehnica” Bucharest
Associate Professor Irina Fagarasanu University “Politehnica” Bucharest
Professor Viorel Marinescu Technical Civil Engineering Bucharest
Associate Professor George Carutasu Romanian-American University
Associate Professor Cristina Coculescu Romanian-American University
Associate Professor Daniela Crisan Romanian-American University
Associate Professor Alexandru Tabusca Romanian-American University
Associate Professor Alexandru Pirjan Romanian-American University
Lecturer Gabriel Eugen Garais Romanian-American University

Senior Staff Text Processing:


Lecturer Justina Lavinia Stănică Romanian-American University
Lecturer Mariana Coancă Romanian-American University
PhD. student Dragos-Paul Pop Academy of Economic Studies
JISOM journal details 2017

No.  Item                                                                 Value
1    Category 2010 (by CNCSIS)                                            B+
2    CNCSIS Code                                                          844
3    Complete title / IDB title                                           JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT
4    ISSN (print and/or electronic)                                       1843-4711
5    Frequency                                                            SEMESTRIAL
6    Journal website (direct link to journal section)                     JISOM.RAU.RO
7    IDB indexation (direct link to journal section / search interface)   PROQUEST, EBSCO, REPEC, GALE Cengage Learning

Contact
First name and last name   Virgil CHICHERNEA, PhD, Professor
Phone                      +4-0729-140815 | +4-021-2029513
E-mail                     chichernea.virgil@profesor.rau.ro
                           vchichernea@gmail.com

ISSN: 1843-4711
The Proceedings of Journal ISOM Vol. 11 No. 2
CONTENTS
Costin-Anton Boiangiu, Luiza Grigoraş: MRC – THE PROPOSED DOCUMENT IMAGE COMPRESSION SCHEME ..... 219
Goran Ćorluka, Ana Vukušić: SEASONAL CONCENTRATION OF TOURISM IN CROATIA ..... 232
Sanda Micula, Rodica Sobolu: APPLICATIONS AND COMPUTER SIMULATIONS OF MARKOV CHAINS ..... 243
Paz San Segundo Manuel: THE DIGITAL UNIVERSITY: INFORMATION SECURITY AND TRANSPARENCY ..... 254
Iuliana Andreea Sicaru, Ciprian Gabriel Ciocianu, Costin-Anton Boiangiu: A SURVEY ON AUGMENTED REALITY ..... 263
Alexandra M. I. Corbea (Florea), Adina Uţă: SYSTEM ANALYSIS OF ROMANIA’S INTRADAY ENERGY MARKET ..... 280
Dana-Mihaela Petroşanu, Alexandru Pîrjan: SOLUTIONS FOR IMPLEMENTING THE N-BODY SIMULATION ON THE PASCAL COMPUTE UNIFIED DEVICE ARCHITECTURE ..... 293
Manolis Kritikos, Dimitrios Kallivokas: THE ELIMINATION METHOD OF FOURIER-MOTZKIN IN LINEAR PROGRAMMING ..... 305
Oana Mihaela Văcaru, Cristina Teodora Bălăceanu, Mihaela Gruiescu: THE STATE AND DYNAMICS OF ECOECONOMY IN ROMANIA. REMARKS AND PERSPECTIVES ..... 310
Stefan Prajica, Costin-Anton Boiangiu: UNIVERSE SIMULATOR ..... 318
Mariana Coancă: FEATURES OF SMART LEARNING ..... 328
Alin Zamfiroiu, Carmen Rotună: BEHAVIOR CHARACTERISTICS OF MOBILE WEB APPLICATIONS AUTHENTICATED USERS ..... 338
Catalin Ispas, Costin-Anton Boiangiu: AN IMAGE COMPRESSION SCHEME BASED ON LAPLACIAN PYRAMIDS ..... 350
Camelia Slave, Mariana Coanca: USING SOFTWARE PACKAGES TO ANALYZE THE VULNERABILITY OF CULTURAL HERITAGE BUILDING ..... 359
Mihai Alexandru Botezatu, Claudiu Pirnau, Radu Mircea Carp Ciocardia: INTERDEPENDENCE BETWEEN E-GOVERNANCE AND KNOWLEDGE-BASED ECONOMY SPECIFIC FACTORS ..... 369
Alexandra Perju-Mitran, Andreea-Elisabeta Budacia: AGE DIFFERENCES IN RESPONSES TO MARKETING COMMUNICATION TECHNIQUES USED IN ONLINE SOCIAL NETWORKS ..... 385

MRC – THE PROPOSED DOCUMENT IMAGE


COMPRESSION SCHEME

Costin-Anton Boiangiu 1*
Luiza Grigoraş 2

ABSTRACT

In this paper we propose a new MRC compression scheme, based on k-means clustering for
image decomposition into layers and on image interpolation and resampling for filling in
sparse layers. The scheme uses JPEG2000 for the actual compression of the foreground and
background layers and JBIG2 for mask compression. Additionally, more than 30 resampling
filter functions representing three main filter families (polynomial, exponential and
windowed-sinc) have been implemented and their effects on this compression scheme
analyzed. From the best-performing filters, several conclusions and recommendations have
been derived, based on an image quality analysis using PSNR and OCR mean text confidence.

KEYWORDS: MRC, Document Compression, Image Compression, Data Compression, Image Processing, OCR, Resampling Filters.

THE PROPOSED COMPRESSION SCHEME

Among the aims of developing this MRC-based compression algorithm were the following:
to obtain the best compression rate possible, based on optimal decomposition of an image
into overlapping layers; to reduce or even eliminate artifacts introduced by digital devices
used to obtain the image or image irregularities coming from the page that was scanned (the
case of old, more degraded books); and to increase the clarity of text against the background,
enhancing readability and the number of successful OCR character recognitions. The paper
at hand continues the work presented by Boiangiu and Grigoraş (2017).
The steps of the proposed compression scheme are depicted in Fig 1. The preprocessing
stages ensure the quality and efficiency of the actual compression. In the segmentation
stage, the image is decomposed into the foreground and background layers using a k-means
clustering algorithm, specifically 2-means clustering, as there are two groups of pixels that
need to be identified: those pertaining to the foreground and those pertaining to the
background. In this case, clusters represent groups of pixel values: higher ones for the
background, which is lighter, lower ones for the foreground, which is darker. The algorithm
is applied in YCrCb color space, to better suit JPEG/JPEG2000 algorithms, in which
luminance/luma values are the most important. At first, clusters are composed only of their

1*
corresponding author, Professor PhD Eng., ”Politehnica” University of Bucharest, 060042 Bucharest,
Romania, costin.boiangiu@cs.pub.ro
2
Engineer, ”Politehnica” University of Bucharest, 060042 Bucharest, Romania, luiza.grigoras@cti.pub.ro


centroids, which are initialized at black and white ([0, 0, 0] and [1, 1, 1] respectively, in
normalized RGB space). The clusters are then constructed by assigning each pixel color to
one of them, based on the Euclidean distance to the cluster's centroid in the established
colorspace, a distance which has to be smaller than a pre-established threshold.
Afterward, new centroids are calculated as the mean value of all pixels in each cluster and
the process is repeated. The algorithm ends when no notable displacement of the centroids is
observed or after a maximum number of iterations. Although the algorithm is a simple one,
it produces satisfactory results for image decomposition into layers.
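As an illustration of this segmentation step, the following minimal sketch (an illustrative reconstruction, not the authors' code) performs 2-means clustering in the YCbCr space with centroids initialized at black and white; for simplicity it uses plain nearest-centroid assignment instead of the thresholded assignment mentioned above, and the BT.601 conversion weights are an assumption.

import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert normalized RGB (values in [0, 1]) to YCbCr using BT.601 weights."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def two_means_mask(rgb, max_iter=20, eps=1e-4):
    """Return a boolean mask: True = foreground (dark cluster), False = background."""
    pixels = rgb_to_ycbcr(rgb).reshape(-1, 3)
    # Initial centroids: black and white, expressed in the converted color space.
    centroids = rgb_to_ycbcr(np.array([[[0.0, 0.0, 0.0]], [[1.0, 1.0, 1.0]]])).reshape(2, 3)
    labels = np.zeros(len(pixels), dtype=bool)
    for _ in range(max_iter):
        d_fg = np.linalg.norm(pixels - centroids[0], axis=1)
        d_bg = np.linalg.norm(pixels - centroids[1], axis=1)
        labels = d_fg < d_bg                       # True -> closer to the dark centroid
        if labels.all() or (~labels).all():        # degenerate case: one empty cluster
            break
        new_centroids = np.stack([pixels[labels].mean(axis=0),
                                  pixels[~labels].mean(axis=0)])
        if np.linalg.norm(new_centroids - centroids) < eps:   # centroids stabilized
            centroids = new_centroids
            break
        centroids = new_centroids
    return labels.reshape(rgb.shape[:2])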

Figure 1. Proposed MRC compression scheme

Figure 2. Illustration of the smooth transitions problem. The edge is not sharp, thus some pixels will
be classified as belonging to the foreground (F), and some to the background (B), as shown by the
“Normal Mask” delimiter. By dissolving the mask when classifying pixels for foreground or for
background, the delimiter is moved so that the resulting layer will contain only appropriate pixels.

The mask dissolving step is necessary before using the mask for separating the foreground
and background layers from the original image, in order to counteract the undesired
effects generated by soft transitions of edges from foreground to background, also
described by Zaghetto and De Queiroz (2008) and illustrated in Fig 2. In this case, the
segmentation algorithm cannot clearly classify all edge pixels as pertaining to the
foreground or to the background layer. Therefore, the foreground layer will contain pixels
from the background, which hinder good compression and produce artifacts on the edges
in the resulting image; similar for the background layer. The mask dissolving


preprocessing step implies enlarging or shrinking the holes in the foreground or


background layer so that no border pixels will be contained.
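One possible way to realize this mask-dissolving step is with morphological erosion and dilation, as sketched below (a minimal sketch assuming SciPy; the square structuring element and its radius are assumptions, not values taken from the paper).

import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def dissolve_masks(mask, radius=1):
    """mask: boolean array, True = foreground.
    Returns (fg_mask, bg_mask): conservative masks that exclude edge pixels."""
    structure = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
    fg_mask = binary_erosion(mask, structure)       # shrink foreground: drop border pixels
    bg_mask = ~binary_dilation(mask, structure)     # shrink background likewise
    return fg_mask, bg_mask

def split_layers(image, fg_mask, bg_mask):
    """Copy the pixels covered by each conservative mask; leave the rest empty (NaN holes)."""
    fg = np.full(image.shape, np.nan)
    bg = np.full(image.shape, np.nan)
    fg[fg_mask] = image[fg_mask]
    bg[bg_mask] = image[bg_mask]
    return fg, bg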

Figure 3. Pyramid of downsampled and upsampled versions of the original foreground/background layer. The white pixels indicate holes in the layer.

The proposed method for data-filling implies using a simplified version of the single
image super-resolution technique proposed by Glasner et al. (2009). This super-resolution
scheme is based on resampling the image: downsampling and upsampling back. Fig 3
schematically describes the process. A pyramid of images is created by successively
downsampling the original image until the most basic level is reached, that of single-pixel
resolution. This pixel is non-empty, thus holding complete information derived from the
existing data in the image. This value is then used to fill in the gaps in the closest mipmap
level. Further on, information is taken from the newly filled in layer and propagated
upward in the same manner, until original image resolution is reached back. At this level,
a new image is obtained, preserving the original existing information, but with no gaps.
As exemplified by Mukherjee et al. (2002), empty pixels are not taken into account when
interpolating. During the upsampling process, only the empty portions of the image have
to be filled with information from higher mipmap levels; the rest of the pixels are copied
from the original levels, in order to preserve the existing information. The final image
layer to be passed to the specific compressor can be a downsampled one; when going
upwards, the algorithm may stop at a higher mipmap level. For RGB images, each color
channel is resampled individually. The downsampling ratio can be defined as:
1 : MipMapPower^(2·MipMapLevel), where MipMapPower represents the actual downsampling
factor, i.e., the ratio between the width or height of adjacent level images.
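A minimal sketch of this hole-filling pyramid is given below, assuming a single-channel layer stored as a float NumPy array with NaN marking the empty pixels; a fixed 2x box reduction stands in for the configurable MipMapPower and the resampling filters discussed in the paper.

import numpy as np

def downsample_ignore_holes(layer):
    """Halve the resolution; each output pixel is the mean of its valid (non-NaN) inputs."""
    h, w = layer.shape
    h2, w2 = (h + 1) // 2, (w + 1) // 2
    padded = np.full((h2 * 2, w2 * 2), np.nan)
    padded[:h, :w] = layer
    blocks = padded.reshape(h2, 2, w2, 2).transpose(0, 2, 1, 3).reshape(h2, w2, 4)
    valid = ~np.isnan(blocks)
    counts = valid.sum(axis=-1)
    sums = np.where(valid, blocks, 0.0).sum(axis=-1)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

def fill_holes_pyramid(layer):
    """Fill NaN holes by propagating values up from coarser pyramid levels."""
    pyramid = [layer.astype(float).copy()]
    while pyramid[-1].size > 1:
        pyramid.append(downsample_ignore_holes(pyramid[-1]))
    for level in range(len(pyramid) - 2, -1, -1):            # coarse to fine
        fine, coarse = pyramid[level], pyramid[level + 1]
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)[:fine.shape[0], :fine.shape[1]]
        holes = np.isnan(fine)
        fine[holes] = up[holes]                               # existing pixels stay untouched
    return pyramid[0]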
The compression engine uses JPEG2000 compression for the foreground and background
layers and JBIG2 compression for the mask layer. These algorithms have been chosen
because they are quite recent and have been developed with the specific purpose of
improving (and even replacing) older compression schemes for continuous-tone images
and bi-level images respectively.
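Purely as an illustration of this final encoding step (not the authors' implementation), the sketch below writes the two continuous-tone layers as JPEG 2000 files, assuming Pillow built with OpenJPEG support; Pillow has no JBIG2 encoder, so the bi-level mask would typically be passed to an external encoder such as jbig2enc, whose exact invocation depends on the tool and is therefore only hinted at in a comment.

import numpy as np
from PIL import Image

def encode_layers(foreground, background, mask, prefix="page"):
    """foreground/background: uint8 RGB arrays; mask: boolean array (True = foreground)."""
    Image.fromarray(foreground).save(f"{prefix}_fg.jp2")   # JPEG 2000, continuous-tone layer
    Image.fromarray(background).save(f"{prefix}_bg.jp2")   # JPEG 2000, continuous-tone layer
    # 1-bit mask; JBIG2 encoding is delegated to an external tool (e.g. jbig2enc),
    # whose command line is tool-specific and not shown here.
    Image.fromarray(mask.astype(np.uint8) * 255).convert("1").save(f"{prefix}_mask.png")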


RESULTS AND DISCUSSION

The performances of the proposed MRC codec have been evaluated with regard to
machine-printed document images, some containing graphics and also line art (Fig 4).

Figure 4. Images used for testing: (a) “Book Page 1”, (b) “Book Page 2” , (c) “Minstrels”, (d)
“Wedding”. The age of the scanned book pages and the scanning process may produce unwanted
artifacts (exemplified below the test images) in the final images, making automated text
recognition harder: the texture of paper, stains, fading of letter edges or other contours,
background noise.

The case studies presented in the following subsections were conducted in order to
determine the best parameters for the data-filling stage, which influence the entire MRC
compression process. These parameters are the resampling filters, first of all, followed by
the mipmap power and downsampling ratios for the foreground and background layers.
The first case study performs a strictly quantitative, objective analysis, determining the
best filters based on the size of the obtained image. The second case study performs a
finer analysis of these best filters, based on qualitative objective measures (PSNR and
OCR confidence), from which several final conclusions have been derived.
Quantitative Measurements
The purpose of the first case study was to determine the best-suited resampling filters for
the proposed MRC compression scheme. For the results to be relevant, JPEG2000 lossless
compression was used and the resulting foreground and background layers were kept at
original image sizes (mipmap level to return is zero). The best filters were considered to
be those which determined the best compression ratio (the smallest final image size). A
part of the results of this test is exemplified in Fig 5.
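A heavily simplified sketch of this size-based comparison is shown below: a layer is rebuilt with a candidate resampling filter, compressed losslessly as JPEG 2000, and the resulting sizes are ranked. Pillow's built-in resampling filters stand in for the 30+ filters implemented in the paper, the downsampling factor and file names are assumptions, and Pillow must be built with OpenJPEG support.

import io
from PIL import Image

FILTERS = {"box": Image.Resampling.BOX, "triangle": Image.Resampling.BILINEAR,
           "cubic": Image.Resampling.BICUBIC, "lanczos": Image.Resampling.LANCZOS}

def compressed_size(layer_img, pil_filter, factor=4):
    """Downsample then upsample the layer with the given filter and return the size in
    bytes of its lossless (reversible) JPEG 2000 encoding."""
    small = layer_img.resize((layer_img.width // factor, layer_img.height // factor),
                             resample=pil_filter)
    rebuilt = small.resize(layer_img.size, resample=pil_filter)
    buf = io.BytesIO()
    rebuilt.save(buf, format="JPEG2000", irreversible=False)   # reversible wavelet
    return buf.getbuffer().nbytes

# Usage: rank filters by resulting size, smallest (best compression) first.
# layer = Image.open("background_layer.png").convert("RGB")
# ranking = sorted((compressed_size(layer, f), name) for name, f in FILTERS.items())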


For all images and mipmap powers, similar filter performances have been registered. It
can be observed that the polynomial filters perform best, as suggested by their grouping in
the left extremity of the graph. The exceptions from this category are the Cubic Spline
filter with parameter α=-1 and the Box filter, which sometimes showed average
performance, comparable to that of Flat-Top windowed-sinc, or worst performance,
depending on the content of the image. Performances of the Gaussian filter are similar to
those of the polynomials. From the BC family, the Notch filter produces very good
results, with Mitchell, Robidoux and Catmull-Rom following at some distance. At the
other extremity are the windowed-sinc filters. From this category, the sinc windowed by
the Flat-Top function performs best, followed (not closely) by Nuttall, Blackman-Harris
and Blackman-Nuttall.
As emphasized in Section 1, an efficient resampling filter would be one that produces a
smooth layer, property requested by JPEG2000 compression. In the light of this
statement, the results can be easily explained. The most blurring polynomial filters
perform best. The Gaussian filter, the Cubic Splines, and B-Spline, with their excessive
blurring, produce the best results. Even though simple, the Box and Triangle filters
perform quite well. This can be justified by the fact that the Box filter produces large
patches of constant color, which can be well compressed (even though transitions between
these patches may be abrupt) and the Triangle filter produces smooth gradients (Thyssen,
2017). The behavior of the filters from the BC-family can be explained through the fact
that they are designed as the best compromise between the level of detail in the image and
the number of artifacts, diminishing blurring, which, in this case, is advantageous. The
windowed-sinc filters are designed to preserve details in the image, having sharpening
effects, and thus produce poorer results.

Figure 5. Image sizes obtained for all tested filters. (a) “Book Page 1” and (b) “Minstrels”. The
dimensions obtained and the filters are marked on the axis. Filter names are colored as follows: red
- polynomial filters, green - windowed-sinc filters, magenta - exponential filters. A similar
grouping of filters can be observed in both images.


For the second case study, 14 filters covering all categories have been selected, as
follows: Triangle, Hermite, Quadratic, Cubic B-Spline, Mitchell, Catmull-Rom, Notch,
Robidoux, Gaussian, Flat-Top, Parzen, Nuttall, Blackman-Nuttall, Blackman-Harris. The Box
and Robidoux Sharp filters have been omitted, because of their inconsistency between tests.
The cubic splines Cubic_H4_2, Cubic_H4_3, Cubic_H4_4 have also been omitted because
their performances in terms of the size of image and PSNR are almost equivalent to those of
Hermite, and the latter has been preferred over them, having better time results as well.
Qualitative Measurements
The purpose of the second study was to further distinguish between performances of the
filters for which similar compression ratios have been previously obtained. The emphasis
was on the quality of the image: both PSNR and OCR mean text confidence metrics have
been used to evaluate filter performances. The text confidence was obtained using the
Tesseract OCR engine (Smith, 2007).
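The two quality metrics can be computed, for example, as in the following minimal sketch, assuming NumPy, Pillow and pytesseract with a locally installed Tesseract engine.

import numpy as np
import pytesseract
from PIL import Image

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for two uint8 images of equal size."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mean_text_confidence(image_path):
    """Average Tesseract confidence over recognized words (a confidence of -1 means no text)."""
    data = pytesseract.image_to_data(Image.open(image_path),
                                     output_type=pytesseract.Output.DICT)
    confidences = [float(c) for c in data["conf"] if float(c) >= 0]
    return sum(confidences) / len(confidences) if confidences else 0.0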
Several mipmap powers have been tested, their influence on the quality of images being
illustrated in Fig 6. PSNR values did not decrease drastically; good qualities for all
images have been obtained even for high values, such as 7 or 8. It has also been observed
that PSNR values varied inversely with the foreground mipmap power and directly with
the background mipmap power. For time performance reasons,
higher values for mipmap powers have been preferred (5-8) and smaller values (1-2) for
mipmap levels to return.

Figure 6. Foreground outputs for the Notch filter. ”Book Page 1” image, different foreground
mipmap powers: (a) 4, (b) 5, (c) 6, (d) 7. In this case, a power of 5 produces the finest transitions,
while powers of 6 and 7 make the foreground more uniform, but isolate patches of color. These
values do not necessarily produce the same effects for all filters; each filter has specific best-
functioning parameters.


Figure 7. Progressive background smoothing of image ”Minstrels” (detail), Cubic B-Spline filter,
various combinations of parameters (mipmap power foreground, mipmap power background,
mipmap level to return foreground, mipmap level to return background), with corresponding
background layer of entire image (below each one): 1st line - original, 2nd line - 5, 6, 1, 1, 3rd line
- 7, 8, 1, 1, 4th line - 5, 6, 2, 2.


Figure 8. PSNR graphs for images: Top to bottom: “Book Page 1”, “Book Page 2”, “Minstrels”,
“Wedding”. Evolution of filter performances is shown comparatively, over several configurations
of parameters (in ascending order of compression rate).


Figure 9. OCR graphs for images: Top to bottom and left to right: “Book Page 1”, “Book Page 2”,
“Minstrels”. Evolution of filter performances is shown comparatively, over several configurations
of parameters (in ascending order of compression rate). These were chosen as a compromise
between all filters, producing good smoothing of foreground and background and ensuring high
compression rates and smaller computation time.

PSNR tests results are quite similar for all images and all combinations of mipmap
powers. The graphs of these results are shown in Fig 8. The top performers were
(constantly) Quadratic, Notch, Cubic B-Spline, Hermite, and Triangle. Mitchell,
Robidoux, Catmull-Rom, Gaussian and Flat-Top performances vary especially with the
types of images. In some cases ("Book Page 1", "Wedding"), Mitchell comes to the front
as the best and Robidoux places itself quite high as well. The Flat Top window has
constant performances and distinguishes itself as the best (in most cases) from its family
of windowed-sinc filters also in terms of PSNR, placing itself well among the
polynomials. For severe compressions of the images "Minstrels" and "Wedding", the
windowed-sinc filters (especially Blackman-Nuttall) have performances comparable to
the others and even outperform the Notch filter and some of the polynomials. This
phenomenon is less obvious in the case of the "Book Page 1" and "Book Page 2" images
(Blackman-Harris performs best).
OCR confidence tests results are depicted in Fig 9. For "Wedding" and "Minstrels"
images, the selected windowed-sinc filters (the Flat Top filter can be outlined again) and
BC-family filters (excepting Notch) have usually registered best results. For images with
many more lines of text ("Book Page 1" and "Book Page 2"), OCR relative performance
was rather constant for each filter. OCR confidence values improved with the smoothing
of the background (Fig 7), i.e., the increase of mipmap powers and mipmap levels to
return. For "Book Page 1", Gaussian, Notch and Cubic B-Spline stood out, followed by
Quadratic and Robidoux. For "Book Page 2", Triangle, Hermite, Quadratic, Cubic B-
Spline, Gaussian and Notch filters were constantly the best. Also, for all images, the OCR


confidences become uniform at high parameter values (high compression rates), with all
filters registering approximately the same values.

CONCLUSIONS

In this paper we presented an MRC-based codec which uses a compression scheme based
on a simple super-resolution idea for the data-filling of sparse foreground and background
layers and JPEG2000 compression for the same layers. Emphasis was placed on the data-
filling stage, of great importance in preparing the layers for the actual JPEG2000 compression.
The performances of the codec have been evaluated in terms of size and quality of the
final image and time of compression, by varying the parameters which influence the data-
filling stage: the resampling filters, the mipmap powers and the mipmap levels to return.
This concludes the work described in Paşca (2013).
Mipmap powers and mipmap levels to return have a great influence on the size and quality
of the final image. An equilibrium between the values of these parameters is preferable.
We recommend medium compression rates - tested combinations of 5, 6, 1, 2 or 7, 8, 1, 2
(mipmap power foreground, mipmap power background, mipmap level to return
foreground, mipmap level to return background). Higher values for foreground powers (8-
9) and lower values for background powers (4-5) can also be used; the same filter
characteristics will remain valid. We also recommend that the background layer should be
at least as heavily compressed as the foreground (by using higher values for mipmap
powers and the levels to return), as this way a smoother, clearer background is obtained,
making the text easier to read and to process with an OCR engine. Noise and other
artifacts from the original paper or caused by the scanning process are eliminated. This is
proven by the increased values of PSNR for high background mipmap powers and higher
OCR confidence for increased levels to return.
Filter performances vary greatly with the type of content the image has and with the
aforementioned parameters. Their performances reflect theoretical predictions and several
of them can be recommended for usage. Some filters registered constant performances in
terms of both image size and quality in most cases, while others sometimes performed the
best and sometimes average or even the worst. Generally, for MRC compression using
JPEG2000, polynomial filters are recommended, because they produce the most
smoothing of the layers. The Gaussian filter produces similar blurring, but with high cost
of computation time and sometimes poor PSNR and OCR results.
Constant and good performances for all images have been registered by Hermite,
Quadratic, Cubic B-Spline and even Triangle, along with Notch and Flat-Top. These
filters represent a good compromise between size, computation time and quality of the
image. The rest of the BC-family and the windowed-sinc filters, along with the Gaussian
have shown fluctuations in performances, sometimes severe.
For small to medium compression rates, Triangle and Hermite filters registered good
results on all types of images. With the increase of compression rate, the BC-family and
the windowed-sinc filters advanced more on the PSNR scale. For medium to high
compression rates, BC-family filters distinguished themselves, namely Mitchell (graphics
and line art images), Catmull-Rom (graphics and line art images) and Robidoux (text


images); for high compression rates, windowed-sinc filters obtained the best
performances, especially Blackman-Harris and Blackman-Nuttall.
The compression scheme can be further improved by implementing more suggestions
from the MRC recommendation (ITU-T Recommendation T.44, 2005), such as cropping
the layer so that only the actually useful part is subjected to compression and using MRC
header options to specify offsets and regions of constant color. The segmentation
algorithm could be modified so that the image is split into more than two layers: k-means
clustering with k equal to 3 or 4, determining k based on certain heuristics. Also, a
mathematical study and filter design matching the characteristics of the resampling filter
to those of the wavelet filter used in JPEG2000 compression might prove useful.
Future work will also be conducted in three main directions in order to improve the
multi-layer separation technique without the use of k-means clustering, by
encompassing modern approaches like model-based fitting of background information
(Minaee and Wang, 2015), sparse-smooth decomposition (Minaee et al., 2015), and
morphological-based methods (Mtimet and Amiri, 2013). This will ensure that, alongside
the choice of the most appropriate filter function and the automatic tuning of the
parameters involved in the plane-filling method, the MRC codec will also benefit from
selecting the most suitable plane separation technique, in order to achieve even better
performance.

REFERENCES

[1] Bottou, L., Haffner, P., Howard, P.G., Simard, P., Bengio, Y., LeCun, Y. (1998).
High-Quality Document Image Compression with DjVu. J. Electron. Imaging, 7,
pp. 410-425.
[2] De Queiroz, R.L., Buckeley, R., Xu, M. (1999). Mixed Raster Content (MRC)
Model for Compound Image Compression. In Proceedings of SPIE Visual
Communications and Image Processing, volume (3653), pp. 1106-1117.
[3] De Queiroz, R.L. (2000). On Data Filling Algorithms for MRC Layers. In
Proceedings of the IEEE International Conference on Image Processing, volume
(2), pp. 586-589, Vancouver, Canada.
[4] De Queiroz, R.L. (2005). Compressing Compound Documents. In Barni, M. (ed.),
The Document and Image Compression Handbook. Marcel-Dekker.
[5] De Queiroz, R.L. (2006). Pre-Processing for MRC Layers of Scanned Images. In
Proceedings of the 13th IEEE International Conference on Image Processing, pp.
3093-3096, Atlanta, GA, U.S.A.
[6] Glasner, D., Bagon, S., Irani, M. (2009). Super-Resolution from a Single Image. In
Proceedings of the 12th IEEE International Conference on Computer Vision, pp.
349-356, Kyoto, Japan.
[7] Paşca L. (2013), Hybrid Compression Using Mixed Raster Content, Smart
Resampling Filters and Super Resolution, Diploma Thesis, unpublished work
(original author name Paşca L., actual name Grigoraş L.).


[8] Haneda, E., Bouman, C.A. (2011). Text Segmentation for MRC Document
Compression. IEEE Trans. Image Process, pp. 1611-1626.
[9] Harris, F.J. (1978). On the Use of Windows for Harmonic Analysis with the
Discrete Fourier Transform. Proc. IEEE, 66, pp. 55-83.
[10] Hauser, H., Groller, E., Theussl, T. (2000). Mastering Windows: Improving
Reconstruction. In Proceedings of the IEEE Symposium on Volume Visualization,
pp. 101-108. Salt Lake City, UT, U.S.A.
[11] ITU-T Recommendation T.44 Mixed Raster Content (MRC), (2005).
[12] Lakhani, G., Subedi, R. (2006). Optimal Filling of FG/BG Layers of Compound
Document Images. In Proceedings of the 13th IEEE International Conference on
Image Processing, pp. 2273-2276, Atlanta, GA, U.S.A.
[13] Lehmann, T.M., Gönner, C., Spitzer, K. (1999). Survey: Interpolation Methods in
Medical Image Processing. IEEE Trans. Med. Imaging, 18, pp. 1049-1075.
[14] Minaee, S., Abdolrashidi, A., Wang, Y. (2015). Screen content image segmentation
using sparse-smooth decomposition. 49th Asilomar Conference on Signals,
Systems, and Computers, Pacific Grove, CA, pp. 1202-1206. DOI:
10.1109/ACSSC.2015.7421331.
[15] Minaee, S., Wang, Y. (2015). Screen content image segmentation using least
absolute deviation fitting. 2015 IEEE International Conference on Image
Processing (ICIP), Quebec City, QC, pp. 3295-3299. DOI:
10.1109/ICIP.2015.7351413.
[16] Mitchell, D.P., Netravali, A.N. (1988). Reconstruction Filters in Computer
Graphics. ACM SIGGRAPH Comput. Graph, 22, pp. 221-228.
[17] Mtimet, J., Amiri, H. (2013). A layer-based segmentation method for compound
images. 10th International Multi-Conferences on Systems, Signals & Devices 2013
(SSD13), Hammamet, pp. 1-5. DOI: 10.1109/SSD.2013.6564005.
[18] Mukherjee, D., Chrysafis, C., Said, A. (2002). JPEG2000-Matched MRC
Compression of Compound Documents. Proceedings of the IEEE International
Conference on Image Processing, volume (3), pp. 73-76.
[19] Parker, J.A., Kenyon, R.V., Troxel, D.E. (1983). Comparison of Interpolating
Methods for Image Resampling. IEEE Trans. Med. Imaging, 2, pp. 31-39.
[20] Pavlidis, G. (2017). Mixed Raster Content, Segmentation, Compression,
Transmission. Signals and Communication Technology Series, Springer Singapore,
DOI: 10.1007/978-981-10-2830-4.
[21] Rabbani, M., Joshi, R. (2002). An overview of the JPEG 2000 still image
compression standard. Signal Process. Image Commun, 17, pp. 3-48.
[22] Smith, S.W. (1997). The Scientist and Engineer’s Guide to Digital Signal
Processing, 1st ed., pp. 285-296. California Technical Publishing, San Diego, CA,
U.S.A.


[23] Smith, R. (2007). An Overview of the Tesseract OCR Engine. In Proceedings of the
9th International Conference on Doc Analysis and Recognition, volume (2), pp.
629-633, Curitiba, Parana, Brasil.
[24] Thévenaz, P., Blu, T., Unser, M. (2000). Image Interpolation and Resampling. In
Handbook of Medical Imaging. Processing and Analysis; Academic Press series in
biomedical engineering, pp. 393-420. Academic Press, San Diego, CA, U.S.A.
[25] Thyssen, A. (2017). ImageMagick v6 Examples - Resampling Filters.
http://www.imagemagick.org/Usage/filter (Accessed: January 25, 2017).
[26] Turkowski, K. (1990). Filters for Common Resampling Tasks. In Glassner, A.S.
(ed.), Graphics Gems, pp. 147-165. Academic Press, San Diego, CA, U.S.A.
[27] Unser, M. (1999). Splines: A Perfect Fit for Signal and Image Processing. IEEE
Signal Process. Mag., 16, pp. 22-38.
[28] WOLFRAM (2017). Filter-Design Window Functions.
https://reference.wolfram.com/language/guide/WindowFunctions.html (Accessed:
January 25, 2017).
[29] Zaghetto, A., De Queiroz, R.L. (2007). MRC Compression of Compound
Documents Using H.264/AVC-I. Simpósio Brasileiro de Telecomunicações, Recife,
Brasil.
[30] Zaghetto, A., De Queiroz, R.L. (2008). Iterative Pre- and Post-Processing for MRC
Layers of Scanned Documents. In Proceedings of the 15th IEEE International
Conference on Image Processing, pp. 1009-1012, San Diego, CA, U.S.A.
[31] Boiangiu, C.A., Grigoraş L. (2017). “MRC – The Theory of Layer-Based
Document Image Compression”, The Proceedings of Journal ISOM, Vol. 11 No. 1
(Journal of Information Systems, Operations Management).


SEASONAL CONCENTRATION OF TOURISM IN CROATIA

Goran Ćorluka 1*
Ana Vukušić 2

ABSTRACT

The majority of Mediterranean countries are suffering from seasonality. The seasonal
pattern is most pronounced in destinations famous for leisure tourism. Croatia is an
established example of a sun-and-sea destination. Tourist activities are increasing,
whereby tourism is growing but not developing, resulting in seasonality of business.
The paper makes evaluations based on secondary data acquired from statistical
publications of the Croatian Ministry of Tourism. Seasonal concentration is measured
using three methodological approaches: the Seasonality ratio, the Lorenz curve and the
Gini coefficient. Extreme seasonal concentration in tourist arrivals and overnights is
identified. Economic, employment, ecological and socio-cultural implications arising
from tourism seasonality are elaborated and proposals for future activities to mitigate the
seasonal concentration of tourism are provided. The paper contributes to the seasonality
literature by applying different measurement methods in a holistic way and by a detailed
elaboration of the implications arising from the seasonal concentration of tourist activities.

KEYWORDS: Seasonal concentration, tourism, Croatia, seasonality implications

INTRODUCTION

Tourism is one of the leading and fastest growing industries in the world (Volvo, 2010).
The importance of tourism in the world economy is highlighted in the World Travel &
Tourism Council report (WTTC 2017). According to the annual research, in 2016 the total
contribution of Travel and Tourism to GDP was 10.2% of total GDP, the total
contribution of Travel and Tourism to employment was 9.6%, visitor exports generated
USD 1,401.5bn (6.6% of total exports), while the contribution of Travel and Tourism to
total investment was 4.4%. Tourism is seen as an economic generator, especially in less-
developed countries. Economies are looking for an economic breakthrough through
tourism. The main achievement is the increase in tourist arrivals and overnights, whereby
the tourism industry is growing under uncontrolled conditions. Nowadays we have
examples of countries experiencing a rapid growth of tourist activities and relying
predominantly on tourism as an economic activity, experiencing growth rather than
development. Strategic planning is missing and the focus is on tourism expansion instead
of tourism development. The unconstrained increase in tourist activities results in spatial and

1
* corresponding author, PhD, Lecturer, Head of Business Trade Unit, University of Split, Croatia,
Department of Professional Studies, gcorluka@oss.unist.hr
2
Professional assistant, University of Split, Croatia, Department of Professional Studies,
ana.vukusic01@gmail.com


temporal overuse of tourist facilities. Destinations end up suffering from overutilization in


one part of the year and underutilization in the rest of the year. Such tourism
development is found in many destinations on the Mediterranean; one example is Croatia.
Croatia is a well-established tourist destination on the Mediterranean, famous for beach
tourism. Over the last two decades tourism has become Croatia's leading industry. To
highlight the increase of tourist activities on the territory of Croatia, a comparison of
tourist arrivals in 2016 with those in 2005 and 1995 is provided. The number of tourist
arrivals in 2016 was 15,594,157, compared to 9,995,000 in 2005 and 2,438,000 in 1995,
which is an increase of 56% relative to 2005 and 539,65% relative to 1995 (Ministry of Tourism,
Republic of Croatia, 2015). In the new decade the average annual rate of change in arrivals
is 6,66% and 5,59% in overnights (Ministry of Tourism, Republic of Croatia, 2010-2016);
tourism revenues are also increasing, with an average annual rate of change over the
observed period of 4,79% (Croatian National Bank, HNB 2010-2016). Without doubt,
Croatia is expanding in tourism. Tourism is continuously gaining importance in the
overall national economy. Tourism is a great contributor to GDP (share of
18,2%), exports (share of 35,1% of visitor exports in total exports), employment (direct
tourism employment share of 6,6% in total employment) and investment (0,92% share of
investments in the tourism sector in GDP) (Croatian National Bank, HNB 2015). The growth
of Croatian tourism is extensive and uncontrolled, with an increased dependence of the
national economy on tourism. Multiplying tourism effects is the set goal, but
sustainability is missing. Today Croatia is internationally established as a sun-and-sea
destination. Tourist activities are concentrated on the coastline within the seven
coastal counties during the summer months.
Table 1. Coastal counties share in total tourist arrivals

Year                                              2010   2011   2012   2013   2014   2015   2016
Share of coastal arrivals in total arrivals (%)   88,74  88,85  88,66  87,90  87,54  87,21  87,12
Source: Author's calculations based on data obtained from Croatian Bureau of statistics (CBS)
and “Tourism in figures”, Editions 2010-2016, Ministry of Tourism, Republic of Croatia

Table 2. Coastal counties share in total tourist overnights

Year                                                  2010   2011   2012   2013   2014   2015   2016
Share of coastal overnights in total overnights (%)   96,07  96,12  96,11  95,79  95,57  95,38  95,25
Source: Author's calculations based on data obtained from Croatian Bureau of statistics (CBS)
and “Tourism in figures”, Editions 2010-2016, Ministry of Tourism, Republic of Croatia

According to Table 1, Croatia is facing a high degree of spatial concentration in tourist
arrivals, as the coastal share in total arrivals reaches up to 88,74% in the observed period.
Despite the slight decrease from 88,74% in 2010 to 87,12% in 2016, the concentration is
alarming. The spatial concentration in tourist arrivals is exceeded by that in overnights,
reaching 96,12% in the observed period. The share of 95,25% of tourist overnights in the
coastal area in 2016 supports the perception of Croatia as a summer sun-and-beach tourist
destination. Besides spatial concentration, Croatia is also facing temporal concentration,
presented in the


remainder of the paper. The aim of this paper is, by comparing methodological
approaches, to identify the degree of seasonal concentration in Croatian tourism and to
highlight the main implications deriving from its extreme temporal concentration.

METHODOLOGY

Data on arrivals and overnights for the observed period 2010-2016 were obtained from the
Croatian Bureau of Statistics (CBS) and “Tourism in figures”, Editions 2010-2016,
Ministry of Tourism, Republic of Croatia. In order to calculate the seasonal concentration
of tourist arrivals and overnights, a combination of measurement methods, including the
Seasonality ratio, the Lorenz curve and the Gini coefficient, was applied to measure the
degree of seasonality and to compare it between years. The Seasonality ratio is calculated
by dividing the highest monthly number of visitors by the average monthly number of
visitors (Yacoumis, 1980). The Seasonality ratio increases with the degree of seasonal
concentration, ranging from 1 to 12. If the number of visitors is constant over the year
(all 12 months), the Seasonality ratio will be 1; if the number of visitors is concentrated in
one month, the Seasonality ratio will be 12. The Lorenz curve is a graphical illustration of
inequality. The Lorenz curve, the line of inequality, is calculated by dividing the monthly
numbers of tourist arrivals/overnights by the total number of tourist arrivals/overnights
within a given year, yielding the monthly ratios. Further, the monthly ratios are ranked
from low to high and their cumulative values are calculated. The gap between the Lorenz
curve and the line of equality is the inequality gap. A higher slope points out higher
seasonal concentration. The Gini coefficient is the most commonly used measure of
inequality, representing the area between the Lorenz curve and the line of equality
(Lundtrop, 2001). The Gini coefficient ranges from 0 to 1, whereby 0 indicates perfect
equality and the value of 1 indicates a fully unequal distribution of tourist arrivals by
month (Kalamustafa and Ulma, 2010). The formula used in the calculation of the Gini
coefficient is explained by Lundtrop (2001) as G = (2/n)·Σ(xi − yi), where n = the number
of monthly ratios, xi = the rank order of the ratios expressed as cumulative shares under
the line of equality, and yi = the cumulative actual ratios in the Lorenz curve. The level
of seasonal concentration increases with the Gini coefficient.
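For clarity, the three measures can be computed from a list of twelve monthly counts as in the following sketch, which simply mirrors the definitions above (an illustration, not the authors' code).

def seasonality_ratio(monthly):
    """Highest month divided by the monthly average (1 = uniform, 12 = all in one month)."""
    return max(monthly) / (sum(monthly) / len(monthly))

def lorenz_curve(monthly):
    """Cumulative shares of the year's total, with months ranked from low to high."""
    total = sum(monthly)
    shares = sorted(m / total for m in monthly)
    cumulative, running = [], 0.0
    for s in shares:
        running += s
        cumulative.append(running)
    return cumulative

def gini(monthly):
    """Gini coefficient from the Lorenz curve: 0 = perfect equality, 1 = full concentration."""
    y = lorenz_curve(monthly)
    n = len(y)
    x = [(i + 1) / n for i in range(n)]   # cumulative shares under the line of equality
    return 2.0 / n * sum(xi - yi for xi, yi in zip(x, y))

# Example: with the twelve 2016 arrival figures from Table 3, seasonality_ratio(...) is about 3,07.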

RESULTS

Analysis of seasonal concentration in Croatian tourism started with the application of


Seasonality ratio on collected data (Table 3).


Table 3. Seasonality ratio 2010.-2016.


Month        2010 arrivals / nights    2011 arrivals / nights    2012 arrivals / nights    2013 arrivals / nights    2014 arrivals / nights    2015 arrivals / nights    2016 arrivals / nights
1 January 107108 303264 108444 275138 119765 310462 107754 277420 124105 311790 143193 362383 155197 394353
2 February 125783 292959 131720 294759 117325 284543 135206 311455 143180 319693 164234 368780 193690 425593
3 March 212840 542570 202780 466786 232981 560273 261242 681147 241823 555711 287506 677798 346753 847986
4 April 470355 1324473 573754 1656647 575333 1632032 548736 1465288 642150 1737616 662418 1749515 653634 1676660
5 May 847641 2979672 792199 2615880 923520 3247770 1037489 3819611 1039816 3319560 1193491 3980684 1218276 4270234
6 June 1325814 6425037 1597348 7794268 1594451 7597318 1667168 7726889 1832442 8708442 1907030 8742000 1909354 8960910
7 July 2747894 17353975 2889885 17810473 2882654 18456016 2915868 18791963 2945798 18401984 3328448 20373298 3914067 22852480
8 August 2856101 19002424 2990657 20233298 3051943 20696087 3346678 21376907 3613735 22499225 3868922 23732640 3985686 25473938
9 September 1122874 6059794 1319949 6852477 1452988 7464083 1454024 7683276 1467872 7840157 1640773 8669315 1878168 9666033
10 October 453963 1372735 501834 1547446 536148 1681016 588361 1798675 658857 1876270 677904 1963488 767087 2275386
11 November 185816 429971 191491 456282 195203 464047 208946 508077 224472 488482 240194 511624 286228 612032
12 December 147927 329505 155616 350821 152849 349816 170004 387106 194166 425018 229210 473790 286017 594247
Average 883676,3 4701365 954639,8 5029523 986263,3 5228622 1036790 5402318 1094035 5540329 1195277 5967110 1299513 6504154
Seasonality ratio 3,232067 4,041895 3,13276 4,022906 3,09445 3,95823 3,227924 3,956988 3,303127 4,060991 3,236842 3,977242 3,067061 3,916564
Aug/Jan 26,66562 62,65968 27,57789 73,53873 25,48276 66,66222 31,0585 77,05611 29,11837 72,16147 27,01893 65,49049 25,68146 64,59679

Source: Author's calculations based on data obtained from Croatian Bureau of statistics (CBS)
and “Tourism in figures”, Editions 2010-2016, Ministry of Tourism, Republic of Croatia

According to the calculated Seasonality ratio, Croatia is experiencing extreme seasonal
concentration of tourist arrivals and overnights. The ratio indicates a higher concentration
in tourist overnights than in tourist arrivals, which is attributed to the longer length of stay
per arrival in the high season and correspondingly a higher share of August overnights
compared to August arrivals. The Seasonality ratio emphasizes an uneven distribution of
tourist activities: as can be seen, in 2016 August had 3,07 times more arrivals and 3,92
times more overnights than the annual monthly average. The uneven distribution of tourist
activities is also presented by the August/January ratio. Comparing the month with the
highest tourist arrivals/overnights, August, with the month with the lowest, January,
enormously uneven proportions are identified: August achieved 25,68 times more arrivals
and 64,59 times more overnights than January. Within the observed period, despite some
slight changes, the seasonality ratio is nearly constant in both arrivals and overnights
(Figure 1).


Figure 1. Diagram of seasonality ratio for tourist arrivals and overnights 2010-2016
Source: Author's calculations based on data obtained from Croatian Bureau of statistics (CBS)
and “Tourism in figures”, Editions 2010-2016, Ministry of Tourism, Republic of Croatia

The analysis continued with the calculation of the Lorenz curve. The Lorenz curve
(Figure 2) shows the distribution of tourist arrivals over the months of the year. The
unequal distribution of tourist arrivals has bent the curve: the share of tourist arrivals in
the best performing quarter of the year outstrips arrivals in the rest of the year. Due to the
intense concentration of tourist arrivals in the top performing months, the gap between the
line of equality and the Lorenz curve is large, pointing out uneven distribution and
seasonal concentration. The Lorenz curve for tourist overnights (Figure 3) is more bowed
than that for tourist arrivals, indicating an even higher concentration of tourist overnights
in the high tourist season. The shape of the Lorenz curve is consistent within the observed
period, pointing out the stability of the seasonal concentration of tourist activities.


Figure 2. Lorenz curve for tourist arrivals 2010-2016. Figure 3. Lorenz curve for tourist overnights 2010-2016.
Source (both figures): Author's calculations based on data obtained from Croatian Bureau of statistics (CBS)
and “Tourism in figures”, Editions 2010-2016, Ministry of Tourism, Republic of Croatia

The last applied measurement method is the Gini coefficient. As a support to the Lorenz
curve, the Gini coefficient uses the Lorenz curve data to present the inequality of tourist
activities. As given in Figure 4, the Gini coefficient for tourist arrivals in the observed
period is very high, reaching a maximum of 0,68 in 2011, with an average value of 0,55
over the observed period. Since the maximum value of the Gini coefficient is 1, it can be
stated that extreme seasonal concentration is experienced. The coefficients for tourist
overnights show the same pattern and are even higher than those for arrivals, with an
average value of 0,64 over the observed period. Like the previous measurement methods,
the Gini coefficient indicates constancy over the years.

Figure 4. Gini coefficient of tourist arrivals and overnights


Source: Author's calculations based on data obtained from Croatian Bureau of statistics (CBS)
and “Tourism in figures”, Editions 2010-2016, Ministry of Tourism, Republic of Croatia


The applied methodological approaches identified extreme seasonal concentration in tourist
arrivals and overnights. The Seasonality ratio points out the disproportion between
tourist arrivals and overnights in the best performing month, August, and the annual
monthly average, showing a high concentration of tourist activities in one month of the year.
The Lorenz curve together with the Gini coefficient demonstrate the uneven distribution of
tourist arrivals and overnights within the year, showing an intensive concentration of tourist
activities during the main tourist season, which accounts for a great share of overall annual
tourist activities.

IMPLICATIONS OF SEASONAL CONCENTRATION

Implications of seasonal concentration result from the overuse of capacities and
resources in one part of the year and the underuse of capacities and resources in the other
part of the year. The literature classifies the implications arising from tourism seasonality
into four major categories: economic, employment, ecological and socio-cultural implications.
Economic and employment implications arise from underutilization in the off-season, while
ecological and socio-cultural implications arise from overutilization in the high season.
The economic implications are mainly related to the off-season period (McEnnif, 1992).
Economic implications occur due to the excessive use of resources in the high season and
the underuse of resources in the low season (Butler, 1994). The problem of seasonal business
operation is particularly pronounced in the tourism industry because the tourism product bears
the characteristics of intangibility and impermanence, so products cannot be stored or
redistributed, which means that if a product or service is not sold on the day, its value is zero
(Cooper et al, 2005; Goeldner and Ritchie, 2003; Commons and Page, 2001). In the
accommodation sector, the negative implications of seasonal fluctuations in demand lead to
a lack of accommodation in the high season, while the underuse of accommodation facilities
in the off-season period can have disastrous economic effects (Koenig and Bischoff, 2005).
Seasonality causes loss of profits due to the inefficient use of resources and the constant fear
of insufficient use of capacities (Butler, 1994; Sutcliffe and Sinclair 1980). Companies and
society have to achieve a sufficient level of income in a few hectic weeks of summer in order
to ensure coverage of annual fixed costs and success for the full year (Goulding, Baum and
Morrison, 2004). The result is a low return on invested capital, which is the main obstacle to
the entry of new capital from private investors and lenders into the tourism sector (Cooper et
al., 2005; Goulding, Bauman and Morrison, 2004; Commons and Page, 2001; Butler, 1994).
The acute seasonal concentration of tourism in Croatia is one of the reasons for the lack of
branded international investment in the tourism sector. Furthermore, excessive utilization
results in price increases during the tourist peaks (Cellini and Rizzo, 2010; Commons and
Page, 2001). As an example of positive economic implications of seasonality, maintenance of
buildings or sites, which is usually scheduled for the off-season and supports construction and
related economic activities, can be named.
The phenomenon of seasonality in the tourism industry has a dramatic impact on
employment, causing high employment in the peak season and reduced employment in the
off-season. Seasonal employment affects the economy, employees and local communities,
and is therefore considered separately from the other impacts of seasonality. Seasonality
and employment in the tourism industry is a well-researched topic in the academic


literature, but there is still a general lack of theoretical knowledge (Krakover, 2000). The
biggest problem of seasonal employment occurs in the full-time recruitment and retention
of employees in destinations marked by seasonality of business (Yacoumis, 1980).
Instability of employment conditioned by seasonal demand for labor causes a significant
decline in the number of employees in the off-season period, when the workforce is forced to
leave the tourist destination in search of permanent employment, which has negative
economic impacts on the destination (Szívás et al. 2003). Jobs in tourism are usually
considered temporary, with low wages and unpopular working hours; seasonality makes
this kind of work even more unstable, so employees are exhausted in the high season and
forced to seek alternative sources of income in the off-season (Kolomiets, 2010). Murphy
(1985) points out that the retention of staff and skills is minimal, since less training is
provided for temporary employees. Therefore, it is particularly difficult to maintain
standards and quality (Baum, 1999). Seasonal work is usually seen as "less significant"
and tends to attract less educated, semi-skilled or unskilled staff. Seasonality in
employment is not always necessarily negative; positive effects are seen in the
employment of students and housewives who are able to work only during certain periods
of the year (Koenig and Bischoff, 2005). Seasonal tourism companies are faced with a
number of challenges, contrary to companies that operate continuously throughout the
year, as they require productive and trained staff but also seasonal and temporary
employees. Recruiting and hiring an adequate number of seasonal employees causes
financial costs for training and is a challenge for the human resources department
(Cooper et al. 2005). What makes this issue even more challenging is that such companies
must maintain effective and professional staff and at the same time rely on less
experienced and less skilled workers. Seasonal workers have less time to adapt to the
working environment but still have to give their best in the peak season. What would be
desirable for these companies is to try to rehire the same, already trained workers year
after year and thus reduce training costs, increase the quality of services and thereby
increase the usefulness and satisfaction of the consumer. Over the last few years Croatian
tourism has been facing enormous problems regarding seasonal employment. Despite
government activities the problem could not be overcome and has even escalated further.
This challenge is going to affect the tourism economy in Croatia increasingly. On the other
hand, workers benefit because they return to the same job, season after season, and are
already familiar with the environment and the workplace, so stress is minimized.
Consequently, the return of seasonal workers to the same workplace is mutually beneficial
for both the employee and the employer (Kolomiets, 2010).
Hartmann (1986) highlights that it would be wrong to evaluate tourism seasonality only in
economic terms, and to separate the regional tourism services system from its
social environment and ecological base. Environmental implications are largely
synonymous with the negative effects arising from the concentration of visitors in the
tourist destination during the peak season. This includes, for example, congestion of rural
areas, disruption of wildlife, waste water production, noise, pollution, depletion of natural
resources, etc. (Chung, 2009; Bender, Schumacher and Stein, 2005). Manning and Powers
(1984) highlight the vulnerability of the ecological carrying capacity to the excessive
concentration of tourist demand in the area. The pronounced pressure on the often fragile
environment, caused by overcrowding and over-utilization during the summer, is cited as
the main problem of environmental protection (Butler, 1994), and as one of the causes of


unsustainable tourism development. In the long term there is a belief that seasonality is
positive for the environment, since the intensive use of natural resources is confined to the
high season rather than scattered all year round, providing rest and time for renewal
during the off-season period (Higham and Hinch, 2002; Butler, 1994; Hartmann, 1986);
however, the degree of overuse during the peak season remains in question, as it might
cause damage which is not renewable. High seasonal concentration is endangering
protected areas in Croatia. The natural carrying capacity is already under discussion, with
limits on the number of daily visitors having been set in protected areas, as is the case in
National Park Krka Waterfalls.
Socio-cultural implications of seasonal variations reflect not only on the local
community but also on the visitors (Koenig and Bischoff, 2005). Studies dealing with this
issue put the focus on the local community. The negative socio-cultural impacts on the local
community are crowds on the streets, traffic jams and lack of parking spaces, the increase in
population during the summer, waiting for various services, growth in prices of social
services, increasing crime, overloaded infrastructure and so on (Chung, 2009; Bischof
and Koenig, 2005; Allcock, 1989; Murphy, 1985). Manning and Powers (1984) highlight
the vulnerability of the social carrying capacity due to the excessive concentration of
tourist demand in the area. The positive socio-cultural impact of tourism seasonality is the
ability of the local community, during the off-season, to fully enjoy their environment, to
relax from stress and strain and to revitalize and renew (Higham and Hinch, 2002; Butler,
1994; Hartmann, 1986; Murphy, 1985). The high concentration of tourist activity during
the peak tourist season also has negative implications for tourist demand, which have been
neglected by researchers. The satisfaction of tourists may be reduced due to overcrowded
tourist attractions, the lack of tourist facilities, insufficient quality of services and the
payment of high prices in the peak season; conversely, in the off-season period numerous
facilities are out of operation (Young, 2004; Commons and Page 2001; Krakover, 2000;
Butler, 1994). Regarding the socio-cultural carrying capacity, Croatia is planning to apply
limitations on the number of daily visitors in protected cultural heritage centres, for
example Dubrovnik.

DISCUSSION AND CONCLUSION

Most tourist destinations experience seasonal patterns of tourist visitation (Jang 2004).
Spatial and temporal concentration of tourist activities is not a characteristic of
a single destination or country; it is experienced in almost all destinations and countries in
the world. Croatia, as a Mediterranean destination attracting mostly leisure-motivated
sun-and-sea tourism, is experiencing a pronounced seasonal concentration of tourist
activities. As presented, the seasonal concentration of tourism in Croatia is extremely high.
Croatia is seen as one of the world's most seasonally affected destinations: in August 2016
it recorded 3.07 times more arrivals and 3.92 times more overnights than the annual
average, with a Gini coefficient of 0.52 for tourist arrivals and 0.63 for tourist overnights.
As a result of this enormous seasonal concentration, implications affecting the economy,
employment, ecology and the socio-cultural community are arising. Croatia is suffering
from tourism seasonality and is seeking solutions to combat or mitigate the seasonal
pattern of tourism. Croatian tourism will be challenged to expand the high season, attract
visitors in the off-season and make tourism more sustainable. Activities
which have to be undertaken include promotion of the diversity of Croatia as a tourist
destination, proactive destination management, adaptation of the tourist supply to demand
needs in the off-season period, and increasing the accessibility of destinations. Croatia has
to turn to new market segments and diversify the destination product. Despite the activities
carried out by the Ministry of Tourism and the Croatian National Tourist Board, results are
still missing. The public and private sectors have to be involved to alleviate the implications
arising from seasonal concentration and to manage, at least, to extend the season, as a
higher goal might be unrealistic at the moment.

LITERATURE

[1] Allcock, J. B. (1989), Seasonality, In Witt, S. F. and Moutinho, L. (eds), Tourism
Marketing and Management Handbook, London, Prentice Hall, pp. 387-392.
[2] Baum, T., (1999), Seasonality in tourism: understanding the challenges, Tourism
Economics, Vol. 5 (1), pp. 5-8.
[3] Bender, O., Schumacher, K. P., Stein, D. (2005), Measuring Seasonality in Central
Europe's Tourism – how and for what, CORP & Geomultimedia05, Feb. 22-25, pp. 303-309.
[4] Butler, R., (1994), Seasonality in Tourism: Issues and Problems, In: Seaton, A.V.
(ed), Tourism: The State of the Art, pp. 332-339.
[5] Cellini, R., Rizzo, G., (2010), Private and Public Incentive to Reduce Seasonality: a
Simple Theoretical Model, University of Catania, Faculty of Economics & DEMQ,
Catania, doi: http://dx.doi.org/10.5018/economics-ejournal.ja.2012-43
[6] Chung, J. Y., (2009), Seasonality in Tourism: A Review, e-Review of Tourism
Research (eRTR), Vol. 7, No. 5, pp. 82-96.
[7] Commons, J., Page, S., (2001), Managing Seasonality in Peripheral Tourism
Regions: The Case of Northland, New Zealand, in Baum, T., Lundtrop, S. (eds),
Seasonality in Tourism, Pergamon, Amsterdam, pp. 153-172. doi:
http://dx.doi.org/10.1016/B978-0-08-043674-6.50013-1
[8] Cooper, C., Fletcher, J., Fyall, A., Gilbert, D., Wanhill, S., (2005), Tourism
Principles and Practice, (3rd ed.), Pearson Education
[9] Croatian National Bank, (2015), HNB Statistics Report
[10] Goeldner, C. R., Ritchie, J. R. B., (2003), Tourism: Principles, Practice,
Philosophies, (9th ed.), New York, Chichester: Wiley
[11] Goulding, P. J., Baum, T. G., Morrison, A. J., (2004), Seasonal Trading and
Lifestyle Motivation: Experience of Small Tourism Business in Scotland, Journal of
Quality Assurance in Hospitality and Tourism, Vol. 5 (2/3/4), pp. 209-238. doi:
http://dx.doi.org/10.1300/J162v05n02_11
[12] Hartmann, R., (1986), Tourism, seasonality and social change, Leisure Studies, Vol.
5, No. 1, pp. 25-33.
[13] Higham, J., Hinch, T., (2002), Tourism, sport and season: the challenges and
potential for overcoming seasonality in the sport and tourism sector, Tourism
Management, Vol. 23, pp. 175-185. doi: http://dx.doi.org/10.1016/S0261-5177(01)00046-2
[14] Karamustafa, K., Ulama, S., (2010), Measuring the seasonality in tourism with the
comparison of different methods, EuroMed Journal of Business, Vol. 5, No. 2, pp. 191-214.
doi: http://dx.doi.org/10.1108/14502191011065509
[15] Koenig, N., Bischoff, E. E. (2004), "Analyzing Seasonality in Welsh Room
Occupancy Data", Annals of Tourism Research, Vol. 31, No. 2, pp. 374-392. doi:
http://dx.doi.org/10.1016/j.ijhm.2012.12.002
[16] Kolomiets, A., (2010), Seasonality in Tourism Employment. Case: Grecotel Kos
Imperial, Kos, Greece, Saimaa University of Applied Sciences, Tourism and
Hospitality, Degree Programme in Tourism, Bachelor of Hospitality Management, Imatra
[17] Krakover, S., (2000), Partitioning Seasonal Employment in the Hospitality Industry,
Tourism Management, Vol. 21, pp. 461-471. doi: http://dx.doi.org/10.1016/S0261-5177(99)00101-6
[18] Lundtrop, S., (2001), Measuring tourism seasonality, In Baum, T. and Lundtrop, S.
(eds), Seasonality in Tourism, pp. 23-50, Oxford: Pergamon
[19] Manning, R. E., Powers, L. A., (1984), Peak and off-peak use: Redistributing the
outdoor recreation/tourism load, Journal of Travel Research, Vol. 23, pp. 25-31.
[20] McEnnif, J. (1992), Seasonality of Tourism Demand in the European Community,
Travel and Tourism Analyst, Vol. 3, pp. 67–88.
[21] Ministry of Tourism, Republic of Croatia, Tourism in figures, Editions 2010–2016.
[22] Murphy, P. E. (1985), Tourism: a community approach, Methuen, New York
[23] Sutcliffe, C., Sinclair, M. (1980), The Measurement of Seasonality within the
Tourist Industry: An Application to Tourist Arrivals in Spain, Applied Economics,
Vol. 12, pp. 429–441
[24] Szivas, E., Riley, M., Airey, D. (2003), Labour mobility into tourism: attraction
and satisfaction, Annals of Tourism Research, Vol. 30, No. 1, pp. 64-76.
[25] Volo, S., (2010), Seasonality in Sicilian tourism demand, Dipartimento di Metodi
Quantitativi per le Scienze Umane, Universita di Palermo, Italy
[26] World Travel & Tourism Council (2017), Travel and tourism economic impact 2017
[27] Yacoumis, J. (1980), Tackling seasonality: the case of Sri Lanka, International
Journal of Tourism Management, Vol. 1, No. 4, pp. 84-98. doi:
http://dx.doi.org/10.1016/0143-2516(80)90031-6

APPLICATIONS AND COMPUTER SIMULATIONS OF MARKOV CHAINS

Sanda Micula 1
Rodica Sobolu 2*

ABSTRACT

In this paper we discuss Markov chains: theoretical results, applications and algorithms
for computer simulations in MATLAB. We describe the use of Monte Carlo methods for
estimating probabilities and other characteristics relating to Markov chains. The paper
concludes with some interesting applications.

KEYWORDS: Markov chains, stochastic processes, computer simulations, Monte Carlo methods, MATLAB.

AMS Subject Classification: 60E05, 60G99, 60J10, 65C05, 65C60.

1. INTRODUCTION

In probability theory and related fields, a Markov process (named after the Russian
mathematician Andrey Markov) is a stochastic process that satisfies the
"memorylessness" property, meaning that one can make predictions for the future of the
process based solely on its present state, independently of its history. A Markov chain
is a Markov process that has a discrete state space. Markov chains have many
applications as statistical models of real-world problems, such as counting processes,
queuing systems, exchange rates of currencies, storage systems, population growth and
other applications in Bayesian Statistics.
Monte Carlo methods are used to perform many simulations using random numbers and
probability to get an approximation of the answer to a problem which is otherwise too
complicated to solve analytically. Such methods use approximations which rely on “long
run” simulations, based on computer random number generators. Monte Carlo methods
can be used for (but are not restricted to) computation of probabilities, expected values
and other distribution characteristics.

1
Department of Mathematics and Computer Science, Babes-Bolyai University, Cluj-Napoca, Romania,
smicula@math.ubbcluj.ro
2*
corresponding author, Department of Land Measurements and Exact Sciences, University of Agricultural
Sciences and Veterinary Medicine, Cluj-Napoca, Romania, rodica.sobolu@usamvcluj.ro

1.1. Preliminaries

We recall a few notions from Probability Theory that will be needed later.
Let S be the sample space of some experiment, i.e. the set of all possible outcomes of that
experiment (called elementary events and denoted by $e$). Let P be a probability mapping
(see [4]).
Definition 1.1. A random variable is a function $X : S \to \mathbb{R}$ for which $P(X \le x)$ exists,
for all $x \in \mathbb{R}$.
If the set of its values is at most countable in $\mathbb{R}$, then X is a discrete random variable; otherwise,
it is a continuous random variable.
If X is a discrete random variable, then a better way of describing it is to give its
probability distribution function (pdf) or probability mass function (pmf), an array
that contains all its values $x_i$ and the corresponding probabilities with which each value
is taken,

$$X \sim \begin{pmatrix} x_1 & x_2 & \dots \\ p_1 & p_2 & \dots \end{pmatrix}, \quad p_i = P(X = x_i), \quad \sum_i p_i = 1. \qquad (1.1)$$

Of the discrete probability laws, we recall two of the most widely used.
Bernoulli distribution Bern(p), with parameter $p \in (0, 1)$. This is the simplest of
distributions, with pdf

$$X \sim \begin{pmatrix} 0 & 1 \\ 1 - p & p \end{pmatrix}. \qquad (1.2)$$

It is used to model “success/failure” (i.e. a Bernoulli trial), since many distributions are
described in such terms.
Binomial distribution B(n, p), with parameters $n \in \mathbb{N}$ and $p \in (0, 1)$. Consider a series of
n Bernoulli trials with probability of success p in every trial (and of failure $q = 1 - p$). Let X be the
number of successes that occur in the n trials. Then X has a Binomial distribution, with
pdf

$$P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \dots, n. \qquad (1.3)$$

Note that a Binomial B(n, p) variable is the sum of n independent Bern(p) variables.
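This relation can be checked quickly by simulation; the following is a minimal MATLAB sketch, where the parameters n, p and the number of replications N are illustrative choices:

% Check that a sum of n independent Bern(p) values behaves like a B(n, p) value.
n = 10;  p = 0.3;  N = 1e5;       % illustrative parameters
B = sum(rand(N, n) < p, 2);       % each row: n Bernoulli(p) trials, summed
disp(mean(B))                     % close to the theoretical mean n*p = 3
disp(var(B))                      % close to the theoretical variance n*p*(1-p) = 2.1
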
Let us also recall the notion of conditional probability and related properties.
Definition 1.2. Let A and B be two events with $P(B) \neq 0$. The conditional probability of
A, given B, is defined as

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$
The next result is known as the total probability rule.

Proposition 1.3. Let $\{H_1, H_2, \dots, H_n\}$ be a partition of S, i.e. $\bigcup_i H_i = S$ (the $H_i$ are an
exhaustive collection of events) and $H_i \cap H_j = \emptyset$ for $i \neq j$ (the $H_i$ are mutually exclusive
events). Let E be any event and B any event with $P(B) \neq 0$. Then

$$P(E \mid B) = \sum_{i=1}^{n} P(E \mid H_i \cap B)\, P(H_i \mid B). \qquad (1.4)$$

2. STOCHASTIC PROCESSES AND MARKOV CHAINS

Random variables describe random phenomena at a particular moment of time, but
many variables change and evolve in time (think air temperatures, stock prices,
currency rates, CPU usage, etc). Basically, stochastic processes are random variables that
develop and change in time.
Definition 2.1. A stochastic process is a random variable that also depends on time. It
is denoted by $X(t, e)$ or $X_t(e)$, where t is time and e is an outcome. The values
of $X(t, e)$ are called states.
If t is fixed, then $X_t$ is a random variable, whereas if we fix e, $X_t(e)$ is a function of
time, called a realization or sample path of the process.
Definition 2.2. A stochastic process is called discrete-state if $X_t$ is a discrete
random variable for all t, and continuous-state if $X_t$ is a continuous random variable,
for all t.
Similarly, a stochastic process is said to be discrete-time if the set of times is discrete and
continuous-time if the set of times is a (possibly unbounded) interval in $\mathbb{R}$.
Throughout the paper, we will omit writing e as an argument of a stochastic process (as it
is customary when writing random variables).

2.1. Markov Processes and Markov Chains; Transition Probability Matrix

Definition 2.3. A stochastic process is Markov if for any times $t_1 < t_2 < \dots < t_n < t$
and any sets $A, A_1, \dots, A_n$,

$$P\big(X(t) \in A \mid X(t_1) \in A_1, \dots, X(t_n) \in A_n\big) = P\big(X(t) \in A \mid X(t_n) \in A_n\big). \qquad (2.1)$$

What this means is that the conditional distribution of $X(t)$, given observations of the
process at several moments in the past, is the same as the one given only the latest
observation.
Definition 2.4. A discrete-state, discrete-time Markov stochastic process is called a
Markov chain.
To simplify the writing, we use the following notations: since a Markov chain is a discrete-
time process, we can see it as a sequence of random variables $X_0, X_1, X_2, \dots$, where $X_k$
describes the situation at time $t = k$. It is also a discrete-state process, so we denote the
states by 1, 2, …, n (they may start at 0 or some other value and n may possibly be $\infty$).

Then the random variable $X_k$ has the pdf

$$X_k \sim \begin{pmatrix} 1 & 2 & \dots & n \\ P_k(1) & P_k(2) & \dots & P_k(n) \end{pmatrix}, \qquad (2.2)$$

where $P_k(i) = P(X_k = i)$, $i = 1, \dots, n$. Since the states (the values of the
random variable $X_k$) are the same for each k, one only needs the second row to describe
the pdf. So, let
$$P_k = [\,P_k(1) \;\; P_k(2) \;\; \dots \;\; P_k(n)\,] \qquad (2.3)$$
denote the vector on the second row of (2.2).
The Markov property (2.1) can now be written as
$$P(X_{k+1} = j \mid X_k = i, X_{k-1} = i_{k-1}, \dots, X_0 = i_0) = P(X_{k+1} = j \mid X_k = i), \quad \text{for all } k. \qquad (2.4)$$
We summarize this information in a matrix.
Definition 2.5.
- The conditional probability
$$p_{ij}(t) = P(X_{t+1} = j \mid X_t = i) \qquad (2.5)$$
is called a transition probability; it is the probability that the Markov chain transitions
from state i to state j, at time t. The matrix
$$P(t) = \big(p_{ij}(t)\big)_{i,j = 1, \dots, n} \qquad (2.6)$$
is called the transition probability matrix at time t.
- Similarly, the conditional probability
$$p^{(h)}_{ij}(t) = P(X_{t+h} = j \mid X_t = i) \qquad (2.7)$$
is called the h-step transition probability, i.e. the probability that the Markov chain moves
from state i to state j, in h steps, and the matrix
$$P^{(h)}(t) = \big(p^{(h)}_{ij}(t)\big)_{i,j = 1, \dots, n} \qquad (2.8)$$
is the h-step transition probability matrix at time t.


Definition 2.6. A Markov chain is homogeneous if all transition probabilities are
independent of time,
$$p_{ij}(t) = p_{ij}, \quad p^{(h)}_{ij}(t) = p^{(h)}_{ij}, \quad \text{for all } t.$$
Throughout the rest of the paper, we will only refer to homogeneous Markov chains (even
if not specifically stated so).
Proposition 2.7. Let X be a homogeneous Markov chain with transition probability matrix P.
Then the following relations hold:
$$P^{(h)} = P^h, \quad \text{for all } h \ge 1,$$
$$P_h = P_0\, P^h, \quad \text{for all } h \ge 1. \qquad (2.9)$$


Proof:
The proof of the first relation goes by induction.
Obviously, the first relation in (2.9) is true for h = 1. Assume $P^{(h-1)} = P^{h-1}$. For a
matrix M, we use the notation $M = (M_{ij})_{i,j}$ and, similarly, for a vector v, $v = (v_i)_i$.
Since the events $\{X_{t+h-1} = k\}$, $k = 1, \dots, n$, form a partition, using the total probability rule (1.4)
for $B = \{X_t = i\}$, we have

$$p^{(h)}_{ij} = P(X_{t+h} = j \mid X_t = i) = \sum_{k=1}^{n} P(X_{t+h} = j \mid X_{t+h-1} = k, X_t = i)\, P(X_{t+h-1} = k \mid X_t = i) = \sum_{k=1}^{n} p_{kj}\, p^{(h-1)}_{ik} = \big(P^{h-1}P\big)_{ij},$$

for all $i, j$ (by the Markov property and homogeneity),
so $P^{(h)} = P^{h-1}P = P^h$.
To prove the second relation in (2.9), for each state $j$ we have $P_h(j) = P(X_h = j)$.
Again, using (1.4) for the events $\{X_0 = i\}$, $i = 1, \dots, n$, we get

$$P_h(j) = \sum_{i=1}^{n} P(X_h = j \mid X_0 = i)\, P(X_0 = i) = \sum_{i=1}^{n} P_0(i)\, p^{(h)}_{ij},$$

so, by the previous relation proved, $P_h = P_0 P^{(h)} = P_0 P^h$.


Remark 2.8. We used the fact that the destination states are mutually exclusive and
exhaustive events, thus forming a partition. That is because from each state, a Markov
chain makes a transition to one and only one state. As a consequence, in the matrices P and
$P^{(h)}$, the sum of all the probabilities on each row is 1. Such matrices are called
stochastic.
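As a short numerical illustration of Proposition 2.7 and Remark 2.8, the following MATLAB sketch computes an h-step transition matrix and the distribution of $X_h$ for a small chain; the matrix P and the vector P0 below are illustrative values, not taken from the paper:

P  = [0.7 0.3; 0.4 0.6];      % an illustrative stochastic matrix (rows sum to 1)
P0 = [0.5 0.5];               % an illustrative pdf of X_0
h  = 5;
Ph_matrix = P^h;              % h-step transition matrix, first relation in (2.9)
Ph = P0 * P^h;                % pdf of X_h, second relation in (2.9)
disp(sum(Ph_matrix, 2))       % each row sums to 1, so P^(h) is also stochastic
disp(sum(Ph))                 % the pdf of X_h sums to 1 as well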

2.2. Steady-State Distribution; Regular Markov Chains

It is sometimes necessary to be able to make long-term forecasts, meaning we want the
distribution of $X_h$ for large $h$ (eventually $h \to \infty$), so we need to compute
$\lim_{h \to \infty} P_h = \lim_{h \to \infty} P_0 P^h$.

Definition 2.9. Let X be a Markov chain. The vector $\pi = [\,\pi_1 \;\; \pi_2 \;\; \dots \;\; \pi_n\,]$ consisting of the
limiting probabilities $\pi_j = \lim_{h \to \infty} P_h(j)$, if it exists, is called a steady-state
distribution of X.

In order to find it, let us notice that $P_{h+1} = P_h P$. Taking the limit as $h \to \infty$ on both sides, we get

$$\pi P = \pi. \qquad (2.10)$$

Notice that the system (2.10) is a singular linear system (multiplication by a
constant on each side leads to infinitely many solutions). However, since π must also be a
pdf (a stochastic vector), the sum of its components must equal 1. We state the following result,
without proof.
Proposition 2.10. The steady-state distribution $\pi = [\,\pi_1 \;\; \pi_2 \;\; \dots \;\; \pi_n\,]$ of a homogeneous Markov chain X,
if it exists, is unique and is the solution of the linear system

$$\begin{cases} \pi P = \pi, \\ \displaystyle\sum_{i=1}^{n} \pi_i = 1. \end{cases} \qquad (2.11)$$

Remark 2.11.
1. When we need to make predictions after a large number of steps, instead of the lengthy
computation of $P_h = P_0 P^h$, it may be easier to try to find the steady-state distribution, π, directly.
2. If a steady-state distribution π exists, then $P^h$ also has a limiting matrix, given by
$$\Pi = \lim_{h \to \infty} P^h,$$
all of whose rows equal π. Notice that π and Π do not depend on the initial state $X_0$. Actually, in the long run the
probabilities of transitioning from any state to a given state are the same,
$\lim_{h \to \infty} p^{(h)}_{ij} = \pi_j$ for every i (all the rows of Π coincide). Then, it is just a matter of
"reaching" a certain state (from anywhere), rather than "transitioning" to it (from another
state). That should, indeed, depend only on the pattern of changes, i.e. only on the
transition probability matrix.
As stated earlier, a steady-state distribution may not always exist. We will mention
(without proof) one case, which is really easy to check, when such a distribution does
exist.
Definition 2.12. A Markov chain is called regular if there exists h ≥ 0, such that
$$p^{(h)}_{ij} > 0, \quad \text{for all } i, j = 1, \dots, n.$$
This is saying that at some step h, $P^{(h)}$ has only non-zero entries, meaning that h-step
transitions from any state to any state are possible.
Proposition 2.13. Any regular Markov chain has a steady-state distribution.

Remark 2.14.
1. Regularity of Markov chains does not mean that all $p^{(h)}_{ij}$ should be positive, for all h.
The transition probability matrix P, or some of its powers, may have some 0 entries, but
there must exist some power h for which $P^h$ has all non-zero entries.
2. If there exists a state i with $p_{ii} = 1$, then that Markov chain cannot be regular. There is
no exit (no transition possible) from state i. Such a state is called an absorbing state.
3. Another example of a non-regular chain is that of a periodic Markov chain, i.e. one for
which there exists a period $T > 1$ such that $P^{(h+T)} = P^{(h)}$ for all h. Obviously, in this case,
$\lim_{h \to \infty} P^h$ does not exist, and neither does a steady-state distribution.
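In practice, a steady-state distribution can be computed numerically by solving the linear system (2.11). A minimal MATLAB sketch, with an illustrative matrix P, is given below; the system $\pi P = \pi$, $\sum_i \pi_i = 1$ is written in the form $Ax = b$ and solved with the backslash operator:

P = [0.7 0.3; 0.4 0.6];            % illustrative stochastic matrix of a regular chain
n = size(P, 1);
A = [P' - eye(n); ones(1, n)];     % equations (P' - I)*pi' = 0, i.e. pi*P = pi ...
b = [zeros(n, 1); 1];              % ... together with the normalization sum(pi) = 1
piSS = (A \ b)';                   % steady-state distribution (row vector)
disp(piSS)                         % here approx. [0.5714 0.4286]
disp(piSS * P - piSS)              % residual close to zero: pi*P = pi

For larger chains, the steady-state distribution could equally be obtained as the normalized left eigenvector of P corresponding to the eigenvalue 1.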

3. COMPUTER SIMULATIONS OF MARKOV CHAINS AND MONTE CARLO METHODS

Monte Carlo methods are a class of computational algorithms that can be applied to a
vast range of problems, where computation of probabilities and other characteristics of
interest is too complicated, resource or time consuming, or simply not feasible. They are
based on computer simulations involving random number generators and are used to
make predictions about processes involving random variables. A computer code that
replicates a certain phenomenon can be put in a loop, be simulated any number of
times and, based on the outcomes, conclusions about its real life behaviour can then be
drawn. The longer the simulated run, the more accurate the predictions are. Monte Carlo
methods can be used for estimation of probabilities, other distribution characteristics,
lengths, areas, integrals, etc.
Many important characteristics of stochastic processes require lengthy complex
computations. Thus, it is preferable to estimate them by means of Monte Carlo methods.
For a Markov chain, in order to predict its future behaviour, all that is required is the distribution
of $X_0$, i.e. $P_0$ (the initial situation), and the pattern of change at each step, i.e. the
transition probability matrix P.
Once $X_0$ is generated, it takes some value $i$ (according to its pdf $P_0$). Then, at the
next step, $X_1$ is a discrete random variable taking the values $1, 2, \dots, n$ with
probabilities from row i of the matrix P. Its pdf will be

$$X_1 \sim \begin{pmatrix} 1 & 2 & \dots & n \\ p_{i1} & p_{i2} & \dots & p_{in} \end{pmatrix}.$$
The next steps are simulated similarly.


Since, at each step, the generation of a discrete random variable is needed, we can use any
algorithm that simulates an arbitrary discrete distribution. Let
$$X \sim \begin{pmatrix} x_1 & x_2 & \dots & x_n \\ p_1 & p_2 & \dots & p_n \end{pmatrix}$$
be any discrete random variable. We use the following simple algorithm (see [4]).

Algorithm 3.1.

1. Divide the interval [0, 1] into the subintervals $A_1, A_2, \dots, A_n$, as follows:
$$A_i = \big[\, p_1 + \dots + p_{i-1}, \;\; p_1 + \dots + p_i \,\big), \quad i = 1, \dots, n \ (\text{with } A_1 = [0, p_1)).$$
2. Let $U$ be a Standard Uniform random variable.
3. If $U \in A_i$, let $X = x_i$.
Indeed, then X takes the values $x_i$ with probabilities $P(X = x_i) = P(U \in A_i) = p_i$, the length of $A_i$.
We put Algorithm 3.1 in a loop to generate a Markov chain.
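A possible MATLAB implementation of Algorithm 3.1 is sketched below; the function name sample_discrete and its interface are our own illustrative choices (the function should be saved in a file sample_discrete.m):

function k = sample_discrete(p)
% One draw from a discrete distribution with probabilities p = [p_1 ... p_n],
% following Algorithm 3.1: split [0,1] into subintervals A_i of lengths p_i
% and return the index of the subinterval that contains a Uniform(0,1) draw.
U = rand;                      % step 2: Standard Uniform random variable
edges = cumsum(p);             % right endpoints of A_1, ..., A_n
k = find(U < edges, 1);        % step 3: U falls in A_k
if isempty(k)                  % guard against round-off when sum(p) < 1
    k = numel(p);
end
end
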

Algorithm 3.2.

1. Given: the sample path size $N_m$ (the length of the Markov chain), the initial distribution $P_0$
and the transition probability matrix P.
2. Generate $X_0$ from its pdf $P_0$.
3. Transition: if $X_t = i$, generate $X_{t+1}$ with the pdf given by row i of P, using Algorithm 3.1.
4. Return to step 3 until a Markov chain of length $N_m$ is generated.
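Using the helper above, Algorithm 3.2 can be sketched as a MATLAB function; again, the name simulate_chain and the argument order are our own illustrative choices:

function X = simulate_chain(P0, P, Nm)
% Generate a sample path of length Nm of a homogeneous Markov chain with
% initial distribution P0 and transition probability matrix P, following
% Algorithm 3.2 (states are labelled 1, 2, ..., n).
X = zeros(1, Nm);
X(1) = sample_discrete(P0);               % step 2: generate the first state from P0
for t = 2 : Nm
    X(t) = sample_discrete(P(X(t-1), :)); % step 3: next state from row X(t-1) of P
end
end

For instance, X = simulate_chain([0.2 0.8], [0.7 0.3; 0.4 0.6], 100); generates one sequence of 100 vowel/consonant labels for the example discussed in the next section.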

4. APPLICATIONS

Let us consider the following example:


An encrypting program generates sequences of letters, such that a vowel is followed by
a consonant with probability 0.3, while a consonant is followed by a vowel with
probability 0.4.
(1) If the first character is a consonant, make predictions for the second and third
character.
This stochastic process, say X, has two states, 1 =“vowel” and 2 =“consonant”, so it is
discrete-state. The time set consists of the position of each character in the sequence, so X
is also discrete-time. Since the prognosis of each character depends only on the previous
one, it is a Markov process and, hence, a Markov chain. Finally, the probability of
transitioning from a vowel or from a consonant at any position in the sequence, is the
same, hence, X is a homogeneous Markov chain.
The initial situation (first character) is given by the pdf $P_0 = [\,0 \;\; 1\,]$, since the first character is a consonant.

The transition probability matrix is

$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}.$$

For the second character, the pdf will be $P_1 = P_0 P = [\,0.4 \;\; 0.6\,]$.
So, the second character has a 40% chance of being a vowel and a 60% chance of being a
consonant. For the third character, the pdf is $P_2 = P_0 P^2 = [\,0.52 \;\; 0.48\,]$.
The third character is a vowel with probability 0.52 and a consonant with probability 0.48.
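These values can be verified directly in MATLAB; a minimal check, using the quantities defined above, is:

P0 = [0 1];                 % the first character is a consonant
P  = [0.7 0.3; 0.4 0.6];    % transition probability matrix
disp(P0 * P)                % pdf of the second character: [0.4 0.6]
disp(P0 * P^2)              % pdf of the third character:  [0.52 0.48]
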
(2) Suppose now that the first character is a consonant with probability 0.8. What is
the prognosis for the third and the 100th character?
In this case, P is the same, but the initial distribution (i.e. the pdf of the first character) changes to
$P_0 = [\,0.2 \;\; 0.8\,]$, so the pdf of the third character becomes
$$P_2 = P_0 P^2 = [\,0.538 \;\; 0.462\,].$$
The 100th character is many steps away, so instead of computing $P_0 P^{99}$, we find the steady-
state distribution. Notice that P has all nonzero entries, so the Markov chain is regular,
which means a steady-state distribution does exist. We find it by solving the system (2.11),
i.e.
$$\begin{cases} 0.7\,\pi_1 + 0.4\,\pi_2 = \pi_1, \\ 0.3\,\pi_1 + 0.6\,\pi_2 = \pi_2, \\ \pi_1 + \pi_2 = 1, \end{cases}$$
with solution $\pi_1 = 4/7 \approx 0.57$ and $\pi_2 = 3/7 \approx 0.43$. So, in the "long run", the pdf of the
situation is $\pi = [\,0.57 \;\; 0.43\,]$,
i.e., about 57% of the characters are vowels and around 43% are consonants.
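Again, these results can be checked with a minimal MATLAB sketch (the steady state is obtained by solving (2.11), as in the sketch of Section 2.2):

P0 = [0.2 0.8];                    % first character: consonant with probability 0.8
P  = [0.7 0.3; 0.4 0.6];
disp(P0 * P^2)                     % pdf of the third character: [0.538 0.462]
A = [P' - eye(2); ones(1, 2)];
disp((A \ [0; 0; 1])')             % steady-state distribution, approx. [0.5714 0.4286]
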
(3) It was found that if more than 15 vowels or more than 12 consonants are
generated in a row, in a sequence of 100 characters, then the code becomes
vulnerable to cracking. Assuming that the first character is a consonant with
probability 0.8, conduct a Monte Carlo study for estimating the probability of the
code becoming vulnerable.
We use Algorithm 3.2 to generate a sample path of length 100, for a large number of
simulations. The MATLAB code is given below.

% Simulate Markov chain.

clear all
Nm = input(' length of sample path = ');
N = input(' nr. of simulations = ');

for j = 1 : N
    p = [0.2 0.8];               % initial distr. of vowels/consonants
    P = [0.7 0.3; 0.4 0.6];      % trans. prob. matrix
    prob(1, :) = p;
    for t = 1 : Nm
        U = rand;
        X(t) = 1*(U < p(1)) + 2*(U >= p(1));
        % simulate X(1), ..., X(Nm) as two-state (Bernoulli-type) variables
        prob(t+1, :) = prob(t, :)*P;
        p = P(X(t), :);          % prepare the distribution for X(t+1);
                                 % its pdf is the (X(t))th row of matrix P
    end

    i_change = [find(X(1:end-1) ~= X(2:end)), Nm];
    % find all indices where X changes states

    longstr = [];                % vector of the lengths of the streaks of
                                 % consecutive vowels/consonants
    longstr(1) = 1;
    if (i_change(1) ~= 1)
        % if X does not change state at step 1, the first streak
        % runs up to the first change of states
        longstr(1) = i_change(1);
    end

    for i = 2 : length(i_change)
        longstr(i) = i_change(i) - i_change(i-1);
        % lengths of all the remaining streaks
    end

    if (X(1) == 1)
        vowel  = longstr(1:2:end);   % the streaks of vowels
        conson = longstr(2:2:end);   % the streaks of consonants
    else
        vowel  = longstr(2:2:end);
        conson = longstr(1:2:end);
    end

    maxv(j) = max([vowel, 0]);   % longest streak of vowels in this run
    maxc(j) = max([conson, 0]);  % longest streak of consonants in this run
end

fprintf('probability of more than 15 vowels in a row is %1.4f\n', mean(maxv > 15))
fprintf('probability of more than 12 consonants in a row is %1.4f\n', mean(maxc > 12))

After running this code several times, for a sample path of length 100 and for 10^4 and
10^5 simulations, it was found that the probability of having more than 15 vowels
in a row is approximately 0.07, whereas the chance of getting more than 12 consonants
in a row is around 0.04. Based on these results, the encrypting technique can be properly
adjusted.

REFERENCES

[1] C. Andrieu, A. Doucet, R. Holenstein, Particle Markov chain Monte Carlo
methods, J. Royal. Statist. Soc. B. Vol. 72(3), 2010, 269–342.
[2] M. Baron, Probability and Statistics for Computer Scientists, 2nd Edition, CRC
Press, Taylor & Francis, Boca Raton, FL, USA, 2014.
[3] L. Gurvits, J. Ledoux, Markov property for a function of a Markov chain: A linear
algebra approach, Linear Algebra and its Applications, Vol. 404, 2005, 85–117.
[4] D. V. Khmelev, F. J. Tweedie, Using Markov Chains for Identification of Writers,
Literary and Linguistic Computing, Vol. 16(4), 2001, 299–307.
[5] T. Liu, Application of Markov Chains to Analyse and Predict the Time Series,
Modern Applied Science, Vol. 4(5), 2010, 161–166.
[6] S. Micula, Probability and Statistics for Computational Sciences, Cluj University
Press, 2009.
[7] S. Micula, Statistical Computer Simulations and Monte Carlo Methods, J. of
Information Systems and Operations Management, Vol. 9(2), 2015, 384–394.
[8] J. S. Milton, J. C. Arnold, Introduction to Probability and Statistics: Principles and
Applications for Engineering and the Computing Sciences, 3rd Edition. McGraw-
Hill, New York, 1995.
[9] J. Pan, A. Nagurney, Using Markov chains to model human migration in a
network equilibrium framework, Mathematical and Computer Modelling, Vol.
19(11), 1994, 31–39.
[10] http://www.mathworks.com/help/matlab/, 2017.

THE DIGITAL UNIVERSITY: INFORMATION SECURITY AND TRANSPARENCY

Paz San Segundo Manuel 1*

ABSTRACT

The digital university has today emerged from its hitherto cloistered existence and
become an open, transparent and crystal-clear institution, where information, data, publications,
classes and projects are open to society, and the great functions of teaching, researching,
and training qualified professionals are no longer fundamental decisions that are
overridingly linked to the university sphere. The digital university was forged in the 20th
century, and it has since revolutionised all previously known information systems. It has
also raised some considerable management challenges. One of the basic aspects of this
transformation concerns the data and information owned or generated by the universities
themselves.
The universities, in spite of the budgetary constraints to which they have been subjected –
sometimes due to the crisis–, should not remain on the margins of these changes, but
engage fully in training future professionals and taking the lead in achieving the goals of
the Information Society. There are numerous challenges facing universities in the 21st
century –they will be required to be digital, open, transparent and crystal-clear. These
challenges particularly concern the treatment, transparency, creation and negotiation of
data and information. The goals of the digital university must be approached from the
point of view of respect for individual freedoms and for citizens’ fundamental rights.

KEYWORDS: Digital university, data security, transparency of information, information society.

I. INTRODUCTION

The digital university has today emerged from its hitherto cloistered existence and
become an open, transparent and crystal-clear institution, where information, data, publications,
classes and projects are open to society, and the great functions of teaching, researching,
and training qualified professionals are no longer fundamental decisions that are
overridingly linked to the university sphere. The digital university was forged in the 20th
century, and it has since revolutionised all previously known information systems. It has
also raised some considerable management challenges. One of the basic aspects of this
transformation concerns the data and information owned or generated by the universities
themselves.

1*
corresponding author, Head of the Department of Legal Policy for Information Data Protection Officer at
the National Distance Education University (UNED), C/ Bravo Murillo, 38 - 28015 Madrid, Spain,
psansegundo@pas.uned.es

From a historical point of view, the Napoleonic university model that prevailed in some
European countries until the 20th century was based on an educational system where one
of the primary missions of universities was to train professionals and graduates; that is,
people accredited by a licence to practice their profession. Here it is worth noting the
analysis of Giner de los Ríos, who said “the German university is above all a scientific
institution; the English university is educational; and the Latin one is professional"1.
This university model has been totally displaced by the online university, among other
reasons because universities are not outside the digital market. Quite the reverse –the
University as an institution is closely linked to the sweeping digital revolution and the
creation of the global market that has emerged as a result of this far-reaching change.
Chronologically, this process started on January 1, 1983, when the Arpanet network,
created by the United States Defence Department, was divided into one military network
–called Milnet– and another civil network known as Internet, which was a technology
originally connecting a network of researchers and managers in the field of information
access and exchange. In Europe, the interaction between the digital society and the
University is evident from the very instant the Internet was created. It is significant that
the first e-mail ever recorded in Spain was sent from a university, the Madrid Polytechnic
University, on December 2, 1985; it came specifically from the University's Data
Communications Department in the School of Telecommunications Engineering.
Universities have been on the front line in creating the Information Society. The largest
projects in this social model have all emerged from universities: Yahoo and Google
started in the 1990s as research projects by students at Stanford University; and Facebook,
before becoming a social network for use by the public, began as an Internet-based
communications space for students at Harvard University. In parallel, the creators of these
great projects are now seeking to have their own university. One of the enterprises worth
highlighting is the Singularity University created by Google and NASA, whose mission is
to educate, inspire and train leaders to apply technologies exponentially in order to tackle
some of the major challenges facing Humanity.
All these new developments in the information society, which emerged from the
universities themselves, have in turn transformed the very nature of these same
institutions. Today the new tools are an integral part of the university fabric.

II. SECURITY AND TRANSPARENCY IN THE DIGITAL UNIVERSITY

The digital university holds vast amounts of information and data. Managing it all is one
of the great challenges facing universities in the future, as they must ensure that the
information is of the utmost quality, as well as being transparent, accessible, reusable,
secure, and profitable. One of the most important aspects in the analysis of the online
university is thus the issue of how to manage information and databases. Universities hold
the key to some of a country's most important personal information databases, and to
significant volumes of quality information.

1
GINER DE LOS RÍOS, Francisco. Escritos sobre la Universidad española. Edición de Teresa Rodríguez de
Lecea. Madrid. Espasa Calpe-Colección Austral. 1990. p. 117.

The importance of data in today’s society is analysed by Víctor Mayer-Schönberger, who
notes that “data is to the information society what fuel was to the industrial economy: the
critical resource powering the innovations that people rely on. Without a rich, vibrant
supply of data and a robust market for services, the creativity and productivity that are
possible may be stifled”1.
The importance of data in our society, as underlined by Víctor Mayer, can be seen in the
important business of managing massive amounts of information –or “Big Data”–, which
in some sectors has only just begun.
The European Data Protection Supervisor in its decision of November 19, 2015, states as
follows:
“Big data, if done responsibly, can deliver significant benefits. (...) But there are serious
concerns about the actual and potential impact of processing of huge amounts of data on
the rights and freedoms of individuals, including their right to privacy. The challenges and
risks of big data therefore call for more effective data protection”2.
The concern of the European Data Protection Supervisor about the privacy of data has
been analysed in the past by leading legal minds such as the American Supreme Court
judge William O. Douglas, who decades earlier succinctly declared that "the right to be let
alone is indeed the beginning of all freedom"3.
Universities should be highly sensitive to their databases and the quality information they
contain, including the publications and research in their possession. Today, almost all
universities have a Google Analytics service, a space where statistical data can be
obtained. This service involves the use of data by people outside the universities
themselves. The challenge is now for the universities that actually possess the data and
information to tap into the potential of these products in their own benefit. Data mining is
one of the big businesses of the future. Today, for example, thanks to data analysis it is
possible to recall medication after analysing mass searches of adverse effects, estimate the
map of contagion for a disease, and understand upcoming electoral results.
Another similar aspect worth noting in terms of data management is the permission for
their re-utilisation for commercial purposes by the private sector. The opinion of the
European Union Open Data Portal on this subject is that it has created a “single point of
access to a growing range of data from the institutions and other bodies of the European
Union”. It goes on to say “data are free for you to use and reuse for commercial or non-
commercial purposes. By providing easy and free access to data, the portal aims to
promote their innovative use and unleash their economic potential. It also aims to help
foster the transparency and the accountability of the institutions”4.

1
MAYER-SCHÖNBERGER, Viktor, CUKIER, Kenneth. Big data: La revolución de los datos masivos.
Turner Publicaciones S.L. 2013. p. 224.
2
Executive Summary of the Opinion of the European Data Protection Supervisor on "Meeting the challenges
of big data: a call for transparency, user control, data protection by design and accountability".
The full text of this opinion can be found in English, French and German on the EDPS website
www.edps.europa.eu , 2016/C 67/05) Official Journal of the European Union 20-02 2016.
3
Public Utilities Commission v. Pollak, 343 U.S. 451, 467 (1952) (William O. Douglas, dissenting opinion).
4
https://open-data.europa.eu/en/data/

According to a study by the European Commission in 2002, public bodies' data and
information “is a potentially rich raw material for new information products and
services”, and has an economic value which was estimated at that time at some 68 billion
euros.
The Spanish legislation on the re-utilisation of public sector information was approved in
2007, and follows the same line as the European Union Open Data Portal. The preface to
this regulation sets forth the re-utilisation of public sector documents for commercial and
non-commercial purposes.
Another large-scale European project implemented in Spain concerned the transparent
management of information and data in the public sector. The preface to
Act 19/2013 of December 9 on transparency states that “transparency, access to public
information and standards of good governance should be the cornerstones of all political
action. Only when the actions of the public authorities are submitted to scrutiny, when
citizens can see how the decisions that affect them are made, how public funds are
handled, or the criteria used as the basis for the actions of our institutions, will we see the
start of a process in which public institutions are called to respond to a society that is
critical, demanding and which requires the participation of (sic) the public authorities”1.
This law represents a further significant development in managing information in
universities. Transparency is another of the mantras of the online university. In traditional
universities in the pre-digital age, secrecy served to define their identity and drive their
management, as the best way of ensuring the survival of the institution. Secrecy is one of
the keys to power, which is why a large number of important jobs such as the faculty
secretary, the general secretary and secretaries have this designation, as they are the
depositories of the secrets of their post. The word “secretary” comes from the Latin verb
cernere, meaning to remove or separate. A secret is something isolated, protected. In the
age of transparency, the first to cast off the name of secretary in Spain were the legal
secretaries, who on October 1, 2015, became known by the term of legal administration
lawyers, as ordained in Constitutional Law 7/2015 of July 21, which amends
Constitutional Law 6/1985, of July 1, of the Judiciary2.
Transparency helps ensure control over public activity and dispels what eastern
philosophy would call the "thundering silence”. However, there is a certain doctrinal
sector which holds that behind the facade of transparency lies an attack on the public
sector and on politics in its broadest sense. Contemporary philosophers such as Byung-
Chul Han claim that politics is a strategic action, and therefore requires the existence of a
secret sphere. According to this author, the online market requires things to be exposed
and on display, and sees the value of secrecy –of separation– as negative. If the society of
transparency is linked to politics, it will gradually bring about what this philosopher calls
a “depoliticised space”.

1
Act 19/2013 of December 9, on transparency, access to public information and good governance. Official
State Gazette (December 10, 2013). Preface, paragraph 1.
http://www.boe.es/buscar/act.php?id=BOE-A-2013-12887.
2
Constitutional Law 7/2015, of July 21, which amends Constitutional Law 6/1985, of July 1 of the Judiciary.
Official State Gazette (July 22 2015). Art. 440, p. 61618.
https://www.boe.es/boe/dias/2015/07/22/pdfs/BOE-A-2015-8167.pdf

Byung-Chul expresses this idea as follows:


"Transparency is inherently positive. It does not harbour negativity that might radically
question the political-economic system as it stands. It is blind to what lies outside the
system. It confirms and optimises only what already exists. For this reason the society of
positivity goes hand in hand with the post-political. Only depoliticised space proves
wholly transparent. Without reference, politics deteriorates into a matter of referendum.
The general consensus of the society of positivity is "Like". It is telling that Facebook has
consistently refused to introduce a "Dislike" button. The society of positivity avoids
negativity in all forms because negativity makes communications stall"1.
In this order of things, Bentham, in his work Panopticon, designed a prison architecture to
control individuals. Today the physical panopticon has declined in importance, as the
digital society has absolute and unprecedented control over individuals without the need
for this prisonlike institution, which the author describes as follows:
“A penitentiary house according to the plan I propose should be a circular building, or
rather two buildings set one inside the other. (…) The whole of this building is like a
beehive, whose cells can all be seen from a central point. Invisible, the inspector prevails
–spirit-like– over all, but in case of need, this spirit may immediately make manifest its
real presence. This penitentiary house could be called a Panopticon, which expresses its
fundamental utility in a single word, namely the faculty of seeing at a single glance
everything that occurs within”2.
Along the same lines, Foucault studies Bentham’s writings and states as follows:
“There are two images, then, of discipline. At one extreme, the discipline-blockade, in the
enclosed institution, established on the edges of society, turned inwards towards negative
functions: arresting evil, breaking communications, suspending time. At the other
extreme, with panopticism, is the discipline-mechanism: a functional mechanism that
must improve the exercise of power by making it lighter, more rapid, more effective, a
design of subtle coercion for a society to come. The movement from one project to the
other, from a schema of exceptional discipline to one of generalised surveillance...”3.
The digital panopticon of today’s society is reinforced by laws regulating information
transparency in the public sector. Universities have become a part of this panopticon, as
they are indeed exposed so they can be observed and controlled, and take their place on
the “longest main street in the world”, as Bill Gates defines the Internet.
Universities cannot hide from the glare of the Internet, the searches, bids and virtual visits
from Google, the social networks, with all the advantages in terms of publicity, and its
drawbacks in the shape of possible damages and violations (damage to corporate image,
identity theft, false claims, scams, online harassment and so on). On this point it is worth
looking more closely at the details of the contracts for signing up to the social networks. If
we take the example of Facebook, we learn that after accepting their contract, the
maximum amount that can be claimed in the case of litigation is $100, or –if greater– the
sum we have paid in the last 12 months; and that in all cases, to achieve satisfaction we

1
BYUNG-CHUL, Han. La sociedad de la transparencia. Barcelona. Herder Editorial, S.L. 2013. pp. 22 y 23.
2
BENTHAM, Jeremy. El Panóptico. Ediciones la Piqueta. 1989. pp. 36 y 37.
3
FOUCAULT, Michel. Vigilar y castigar. 15ª edición. Madrid. Siglo XXI Editores, S.A. 1988. p. 212.

would need to go to the Courts in the District of Northern California or to a state court in
San Mateo County. When we sign up with the social network we supposedly agree that
these courts should have the authority to resolve any possible lawsuits. The contract for
signing up to Facebook specifically states the following:
“Your privacy is very important to us. We designed our Data Policy to make important
disclosures about how you can use Facebook to share with others and how we collect and
can use your content and information. We encourage you to read the Data Policy, and to
use it to help you make informed decisions. (…)
You will resolve any claim, cause of action or dispute (claim) you have with us arising
out of or relating to this Statement or Facebook exclusively in the U.S. District Court for
the Northern District of California or a state court located in San Mateo County, and you
agree to submit to the personal jurisdiction of such courts for the purpose of litigating all
such claims. The laws of the State of California will govern this Statement, as well as any
claim that might arise between you and us, without regard to conflict of law provisions”.
Then, in the third section they highlight, in capital letters, that they are not “LIABLE TO
YOU FOR ANY LOST PROFITS OR OTHER CONSEQUENTIAL, SPECIAL,
INDIRECT, OR INCIDENTAL DAMAGES ARISING OUT OF OR IN CONNECTION
WITH THIS STATEMENT OR FACEBOOK, EVEN IF WE HAVE BEEN ADVISED
OF THE POSSIBILITY OF SUCH DAMAGES”1.
Recently this clause accepting the submission of lawsuits to the US courts has been called
into question. A French teacher, an art lover and painting enthusiast, published a photo of
Courbet’s painting “The origin of the world”. Facebook judged the image to be
pornographic and cancelled his account. In response he took the case to the High Court in
Paris as he considered the clause to be abusive, and they agreed with him. Subsequently,
on February 12 this year, the Court of Appeal confirmed the competence of the French
legal system to judge the American giant Facebook. The claimant is also suing for
payment of €20,000 in damages, arguing that Facebook's action constituted an act of
censorship that violated his freedom of expression.
Another important aspect of the information and data held in universities is their security.
Computerised attacks on institutions and the leakage of documents can cause significant
damage. Notable examples include the data published in the press by Wikileaks, the
international media organisation that releases leaked documents containing secret and
sensitive material of public interest; Snowden's leaked documents and information from
the US National Security Agency; the so-called Panama papers; and Vatileaks, the
publication of secret Vatican documents concerning cases of bribery and corruption. The
leaks in this latest scandal include the Pope's private correspondence and its subsequent
publication. All these incidents have rocked the Vatican and its power structure, and
affected even the upper echelons. A reflection of the pressure deriving from this situation
is the fact that never before have there been two popes, one emeritus and the other in
active service.
Universities have also been subjected to multiple attacks on their servers, users,
passwords and websites, which underlines the critical function of data and information
1
https://es-es.facebook.com/legal/terms/update

protection and its impact on the very existence of the institution. Security is an asset and a
fundamental principle of university management.
Another aspect of the digital market that is certain to have a significant effect on
universities is the fact that the information produced in the university sphere will lose
some of its impact and supremacy within important research groups. What is known as
collective intelligence will move in to occupy the vacuum left by universities. Collective
intelligence refers to a system where people learn, think and act in important projects
outside formal institutions. Examples of this new line of action include Wikipedia, and the
recent classification of the galaxies –a work which before was the domain of the
universities, but is now done with the collaboration of around 100,000 volunteers. This
phenomenon is occurring simultaneously with the current crisis in the process of creating
contents. The emergence of non-creative writing theorists or –what amounts to the same
thing– converting appropriation into a creative act is an issue that is currently being
studied in one of the leading universities in the world, the University of Pennsylvania.
The writings and website of Professor Kenneth Goldsmith are particularly interesting on
this point, as they look at the processes of creation, appropriation and transformation of
the material available on Internet.
Another interesting issue for the online university is the practice of what is known as
cyberdemocracy in the institution. This democratic system will enable direct contact with
the whole community through vertical relations, thus allowing greater participation and
legitimacy for power. This increased participation may also have the drawback of
destroying all the intermediate fabric over time. For this reason, some authors such as
Pérez Luño are now analysing cyberdemocracy as a political system in which control over
the individual takes precedence over democratic participation. This author states the
following:
“There is a suspicion that teledemocracy promotes the vertical structuring of socio-
political relationships. From this standpoint, the theory is that teledemocracy (or
democracy at a distance) may be a vehicle for the progressive depersonalisation and
political alienation of the citizens. It has been observed that instant and permanent
referendum or voting would reinforce a system of «vertical communication» between
citizens and their governors, instead of favouring channels for «horizontal
communication». The tele-democratic system would lead to the depletion of the content –
and ultimately to the abolition– of the intermediate structures and associative relationships
between the State and the individual, in which human beings, as the social animals they
are, become realised. Intermediate groups would thus be eroded and dissolved (political
parties, unions, associations or collective civic movements), which are precisely the
elements that reinforce and unite civil society and the fabric of community relationships
that conform it”1.
Following along the same lines, another point worth mentioning is that frequent use of the
technological media may transform the individual into “a glass man”, as described in the

1
PÉREZ LUÑO, Antonio-Enrique. ¿Ciberciudadaní@ o ciudadaní@.com? Editorial Genisa. January 2004.
p. 85.

decision of the German Constitutional Court in the case concerning the opposition to the
population Census Law in 19831.

III. CONCLUSION

Universities must adapt to the information society in order to continue spearheading some
of the quality information that is produced in these institutions. Another important
function of universities is to participate and collaborate in training the million new
professionals that will be required in the information technology sector in 2020, according
to data from Eurostat.
Finally, it is worth noting that universities, in spite of the budgetary constraints to which
they have been subjected –sometimes due to the crisis–, should not remain on the margins
of these changes, but engage fully in training future professionals and taking the lead in
achieving the goals of the Information Society. There are numerous challenges facing
universities in the 21st century –they will be required to be digital, open, transparent and
crystal-clear. These challenges particularly concern the treatment, transparency, creation
and negotiation of data and information. The goals of the digital university must be
approached from the point of view of respect for individual freedoms and for citizens’
fundamental rights.

REFERENCES

[1] Giner De Los Ríos, Francisco. Escritos sobre la Universidad española. Edición de
Teresa Rodríguez de Lecea. Madrid. Espasa Calpe-Colección Austral. 1990. p. 117.
[2] Mayer-Schönberger, Viktor, Cukier, Kenneth. Big data: La revolución de los datos
masivos. Turner Publicaciones S.L. 2013. p. 224.
[3] Executive Summary of the Opinion of the European Data Protection Supervisor on
"Meeting the challenges of big data: a call for transparency, user control, data
protection by design and accountability", The full text of this opinion can be found
in English, French and German on the EDPS website www.edps.europa.eu , 2016/C
67/05) Official Journal of the European Union 20-02 2016.
[4] Public Utilities Commission v. Pollak, 343 U.S. 451, 467 (1952) (William O.
Douglas, dissenting opinion).
[5] https://open-data.europa.eu/en/data/
[6] Act 19/2013 of December 9, on transparency, access to public information and
good governance. Official State Gazette (December 10, 2013). Preface, paragraph
1. http://www.boe.es/buscar/act.php?id=BOE-A-2013-12887.

1
Bulletin of Constitutional Jurisprudence. IV Foreign Constitutional Jurisprudence. German Constitutional
Court. Decision of December 15, 1983 against the Census Law. Right of personality and human dignity.
https://www.google.es/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-
8#q=Bolet%C3%ADn+de+Jurisprudencia+Constitucional.+IV+Jurisprudencia+Constitucional+Extranjera.+T
ribunal+Constitucional+Alem%C3%A1n.+Sentencia+de+15+de+diciembre+de+1983+Ley+del+Censo+Dere
cho+a+la+personalidad+y+dignidad+humana

[7] Constitutional Law 7/2015, of July 21, which amends Constitutional Law 6/1985,
of July 1 of the Judiciary. Official State Gazette (July 22 2015). Art. 440, p. 61618.
https://www.boe.es/boe/dias/2015/07/22/pdfs/BOE-A-2015-8167.pdf
[8] Byung-Chul, Han. La sociedad de la transparencia. Barcelona. Herder Editorial,
S.L. 2013. pp. 22 y 23.
[9] Bentham, Jeremy. El Panóptico. Ediciones la Piqueta. 1989. pp. 36 y 37.
[10] Foucault, Michel. Vigilar y castigar. 15ª edición. Madrid. Siglo XXI Editores, S.A.
1988. p. 212.
[11] https://es-es.facebook.com/legal/terms/update
[12] Pérez Luño, Antonio-Enrique. ¿Ciberciudadaní@ o ciudadaní@.com? Editorial
Genisa. January 2004. p. 85.
[13] Bulletin of Constitutional Jurisprudence. IV Foreign Constitutional Jurisprudence.
German Constitutional Court. Decision of December 15, 1983 against the Census
Law. Right of personality and human dignity.
https://www.google.es/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-
8#q=Bolet%C3%ADn+de+Jurisprudencia+Constitucional.+IV+Jurisprudencia+Co
nstitucional+Extranjera.+Tribunal+Constitucional+Alem%C3%A1n.+Sentencia+de
+15+de+diciembre+de+1983+Ley+del+Censo+Derecho+a+la+personalidad+y+dig
nidad+humana

A SURVEY ON AUGMENTED REALITY

Iuliana Andreea Sicaru 1*
Ciprian Gabriel Ciocianu 2
Costin-Anton Boiangiu 3

ABSTRACT

The aim of this paper is to present the concept of Augmented Reality (AR) and a summary
of the approaches used for this technique. Augmented Reality is a technique that
superimposes 3D virtual objects into the user's environment in real time. We analyze the
technical requirements that are addressed in order to provide the user with the best AR
experience of his surrounding context. We also take into account the specificity of certain
domains and how AR systems interact with them. The purpose of this survey is to present
the current state-of-the-art in augmented reality.

KEYWORDS: Survey, Augmented Reality

1. INTRODUCTION

Augmented reality is a technique that overlays some form of spatially registered
augmentation onto the physical world. The user can see in real time the world around him,
composited with virtual objects. These virtual objects are embedded into the user's world
with the help of additional wearable devices. The difference between augmented reality
(AR) and virtual reality (VR) is that the former makes use of the real environment and
overlays virtual objects onto it, whereas VR creates a totally artificial environment. In
other words, AR adds virtual information to the real world, whereas VR completely
replaces the real world with a virtual one.
The motivation for this technology varies from application to application, but mostly it
provides the user with additional information that he cannot obtain using only his senses.
Because AR has the potential to address different problems, reputable corporations such as
Google, IBM, Sony and HP, as well as many universities, have put their efforts into developing it.
Augmented Reality is suitable for applications in almost every subject, especially physics,
chemistry, biology, mathematics, history, astronomy, medicine, and even music. These
big companies are working to develop suitable technological devices that can accommodate
any of these subjects and that can ultimately impact the user's life.

1*
corresponding author, Engineer, ”Politehnica” University of Bucharest, 060042 Bucharest, Romania,
andreeasicaru03@gmail.com
2
Engineer, ”Politehnica” University of Bucharest, 060042 Bucharest, Romania, cipriantk@gmail.com
3
Professor PhD Eng., ”Politehnica” University of Bucharest, 060042 Bucharest, Romania,
costin.boiangiu@cs.pub.ro


AR also has a big impact in education and is likely to change the way students will learn in the future. A study performed by [1] concluded that not only did the students' understanding of a lesson increase when using augmented reality, but they were also more motivated and engaged in learning.
The additional 3D virtual information will represent a powerful tool in the user's life, because it has the ability to support and improve his senses and their efficacy. It will impact the way the user learns, travels, talks, plays, treats some diseases and even the way he perceives the smell or flavor of food.
The first part of this paper presents what Augmented Reality is and the motivation behind
this technology. The second part focuses on what hardware and software has been
developed for this technology and how it is best put to use, and the last part focuses on some of the applications of this technology.

2. TECHNOLOGY

Hardware

The hardware components used for AR are the wearable devices that allow the user to see
and interact with the system. These components are: displays, sensors, processors and
input devices. The display offers the user an instant access to the AR environment and it
is usually a form of lightweight see-through optical device. The sensors are usually
MEMS (Micro-Electro-Mechanical Sensors) and they are useful in the tracking process.
The processor is the one that analyzes the visual field and responds to it according to the
AR's application. The input devices are consistent with the application's needs and represent
the way the user interacts with the AR environment. Modern mobile devices like
smartphones and tablet computers include these elements which makes them suitable for
integrated Augmented Reality.
Display Devices
The user can see the virtual world through various display devices, such as: head-mounted
display (HMD), hand held devices, monitors or any optical projection systems.


Figure 1. A closed-view head-mounted display. (Photo: [2])

The standard Head-Mounted Display (HMD), shown in Figure 1, is closed-view and does not allow the user to see the real world directly. It is similar to a helmet, placed on the forehead, and it is mostly used for aviation applications, but it can also be used in gaming, engineering and medicine. These closed-view displays use computer-generated imagery superimposed on the real world view. They usually use one or two small video
cameras and the display technology varies from cathode ray tubes (CRT), liquid crystal
displays (LCDs) to organic light-emitting diodes (OLED) [3]. One disadvantage of these
closed-view helmets is that in case the power is cut off, the user is unable to see.
The see-through HMDs allow light to pass through them and in case the power is cut off,
they act like sunglasses. They are compact devices, lightweight, monocular or binocular
and allow instant access to information either by optical system or video. As seen in
Figure 2, semi-transparent mirrors reflect the computer-generated images into the user's
eyes [4].


Figure 2. See-through HMD (Photo: [5])

There are two main techniques that exist for see-through HMD: curved combiner and
waveguide. The curved combiner diverges light rays for a wider field-of-view (FOV). The
diverged rays need to travel to a single point, the user's eye, for a clear and focused image
[6]. This technique is used by Vuzix’s personal display devices (Figure 2) in applications
for 3D gaming, manufacturing training and military tactical equipment [7]. It is also used
by Laster Technologies products that combine the eyewear with interaction functions such
as: gesture recognition, voice recognition and digital image correlation [8]. The
waveguide or light-guide technique includes diffraction optics, holographic, polarized
optics, reflective optics and projection. This technique is used in many applications and
many companies such as Sony, Epson, Konica Minolta, Lumus, etc. have chosen it as a
suitable technology for their devices.

Figure 3. Google Glass (Photo: [9])

A head-up display (HUD) is a lightweight, compact, transparent display that can show additional data and information and enables the user to remain focused on his task, without taking up too much of his field of view. It was first used by pilots to show basic navigation and flight information. Google Glass, shown in Figure 3, is one of the most well-known HUD devices; it has a touchpad, a camera and a display. This device allows the user to take pictures, go through old photos and events, and offers information
about weather and different news. The device benefits from a high resolution display that
is the equivalent of a 25 inch HD screen, it can take photos up to 5MP and shoot videos
using 720p resolution [10]. It also comes with a dual-core processor, 2GB of RAM, 12GB
of usable memory, a 570 mAh internal battery, Wi-Fi and Bluetooth and its own operating
system, Glass OS. Google provides APIs for Google Glass, available for PHP, Java and
Python [11].
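To give a flavor of how these APIs were typically used, the short Python sketch below (our own illustration, not taken from [11]) inserts a plain-text card into the Glass timeline through the Mirror API via the google-api-python-client library; the credentials object is assumed to have been obtained beforehand through OAuth2 with the appropriate Glass scope.

# Illustrative sketch only: post a text card to the Glass timeline (Mirror API).
# "credentials" is assumed to be a valid OAuth2 credentials object.
from googleapiclient.discovery import build

def post_text_card(credentials, message):
    # Build a client for the Mirror API, version v1
    service = build("mirror", "v1", credentials=credentials)
    # A timeline item containing plain text; Glass shows it as a card
    item = {"text": message}
    return service.timeline().insert(body=item).execute()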


Figure 4. Microsoft Hololens (Photo: [12])

Another HUD device is Microsoft HoloLens, shown in Figure 4, a device that allows the user to interact with colorful virtual objects, project different objects onto the floor and even manufacture objects. It is Microsoft’s revolutionary AR device and packs some
interesting hardware. It has see-through holographic lenses (waveguides), 2 HD 16:9 light
engines, automatic pupillary distance calibration and holographic resolution, 2.3M total
light points. It comes with four environment understanding cameras, one depth camera,
one 2MP photo/ HD video camera, four microphones and one ambient light sensor [13].
Microsoft HoloLens uses a 32-bit architecture processor with a custom-built Microsoft Holographic Processing Unit (HPU 1.0) and benefits from 2GB of RAM and 64GB of flash memory. It uses Windows 10 as its operating system and comes with APIs for Visual Studio 2015 and Unity.
Another AR device is the virtual retinal display (VRD), a personal display device under
development at the University of Washington's Human Interface Technology Laboratory.
With this technology, the image is scanned directly onto the retina of the viewer's eye. The viewer sees what appears to be a conventional display floating in
space in front of them.
EyeTap, presented by [14], also known as Generation-2 Glass, captures rays of light that
would otherwise pass through the center of the user's lens, and substitutes synthetic
computer-controlled light for each ray of real light. The Generation-4 Glass (Laser
EyeTap) is similar to the VRD (i.e. it uses a computer controlled laser light source) except
that it also has infinite depth of focus and causes the eye itself to, in effect, function as
both a camera and a display, by way of exact alignment with the eye and resynthesis (in
laser light) of rays of light entering the eye.
Because mobile devices are really powerful and can act as small computers, they are well suited to display AR applications. Not only do they offer the display technology, with big enough LCD screens, but they also offer cameras, GPS, processors and other sensors. Usually, the phone's camera is used to capture the real world and, with the help of an AR application, the virtual objects are superimposed in real time onto the phone's display,
as shown in Figure 5.


Figure 5. Augmented reality displayed on a handheld device. (Photo: [15])

Tracking devices
In augmented reality, tracking devices are used for registration. By registration,
virtual objects generated by a computer are merged into the real world image. The
computer needs to have a powerful CPU and great amount of RAM in order to be able to
process the images and interact accordingly with the user. In [16] it was identified that for
AR, the tracking devices can be mechanical, magnetic, GPS-based (Global Positioning System), ultrasonic, inertial and/or optical. These devices have different
ranges, resolution, time response and setup that combined can generate different levels of
accuracy and precision [7].
Input and interaction
According to [7], the interaction between the user and the AR world can be obtained through tangible
interfaces, collaborative interfaces, hybrid interfaces or emerging multimodal interfaces.
Tangible interfaces allow the user to interact with real world objects. VOMAR
application, developed by [17], allows users to rearrange the furniture inside a room. The
user's gestures represent intuitive commands and he can select, move or hide different
pieces of furniture. Another input for the AR environment can be sensing gloves that provide tactile feedback. These gloves can use a range of sensors to provide the hand's position or can use vibration motors to simulate different surfaces. Senso Devices is a company that produces Senso Gloves especially for virtual reality, shown in Figure 6. Mark
Zuckerberg, CEO of Facebook, announced that gloves will be used in the near future to
draw, type on a virtual keyboard and even play games.


Figure 6. Tactile data glove SensAble's CyberTouch (Photo: [18])

Collaborative interfaces allow multiple users to connect and share the same virtual
objects. This enhances teleconferences by sharing between participants the same 3D-
windows, display platforms, documents, etc. The application presented in [19] allows
remote videoconferencing and can be integrated in the medical field, where multiple users
can access the same patient and discuss the diagnosis and treatment course.
Hybrid interfaces allow the user to interact with the system through different interaction
devices. A mixed reality system can be configured according to the user's needs and it can adapt accordingly.
Multimodal interfaces combine real objects with speech, touch, hand gestures or gaze. A wearable gestural "sixth sense" interface that allows the user to interact with information projected onto any uniform surface was proposed in [20]. Another method is presented
by [21] and it allows the user to interact with the system by gazing or blinking. It is a
robust, efficient, expressive and easily integrated method that is currently receiving a lot
of attention from the research labs.

Software

The software used for AR is mostly focused on the application specificity. It uses real
world coordinates offered by the tracking devices and camera images. Augmented Reality
Markup Language (ARML), developed within the Open Geospatial Consortium (OGC), is
used to create XML from the coordinates in order to obtain the location information. By
image registration, the captured images are analyzed. This process belongs to the
computer vision field and it is mostly based on video tracking methods and algorithms.
There are software development kits (SDKs) for AR offered by Vuforia, ARToolKit, Layar, Wikitude, Blippar and Meta that enable developers to build their own AR environment.
Usually these methods consist of two parts. The first stage detects the interest points,
fiducial markers or optical flow in camera images. This step can use feature detection
methods like corner detection, blob detection, edge detection or thresholding and/ or other

image processing methods. The second stage restores real world coordinate systems from the data obtained in the first stage. Some methods assume that objects with known geometry are present in the scene (Vuforia). In some of those cases the scene's 3D structure should
be pre-calculated. If part of the scene is unknown simultaneous localization and mapping
(SLAM) can map relative positions as mentioned by [22]. If no information about scene
geometry is available, structure from motion methods like bundle adjustment are used.
Mathematical methods used in the second stage include projective (epipolar) geometry,
geometric algebra, and rotation representation with exponential map, Kalman and particle
filters, nonlinear optimization and robust statistics.
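As a rough illustration of these two stages, the sketch below (our own example built with the OpenCV library, not code from any of the SDKs mentioned above) detects interest points in a camera frame and then recovers the camera pose from the corners of a marker with known geometry; the marker size and the camera matrix are hypothetical placeholders.

# Illustrative two-stage registration sketch (assumed marker size and camera matrix).
import numpy as np
import cv2

def detect_features(frame):
    # Stage one: interest point detection in the camera image (ORB corners/blobs)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors

def estimate_pose(image_points, camera_matrix):
    # Stage two: restore the real world coordinate system, assuming a square
    # 10 cm marker whose four corners were matched in the image (2D points).
    object_points = np.array([[0, 0, 0], [0.1, 0, 0],
                              [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)
    dist_coeffs = np.zeros(4)  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    # rvec and tvec give the rotation and translation used to anchor virtual objects
    return ok, rvec, tvec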

3. APPLICATIONS

AR is a technology suitable for innovative and creative solutions for many problems. The
user's perception of life can be enhanced by bringing virtual information to his immediate
or indirect real surrounding. Although some research, such as [23] considers that AR is
limited to the display technology, AR systems can be developed to apply not only to the
sense of sight but also to touch, smell, hearing or any combination of them. For this reason, AR has a wide range of applicability.

Navigation

One of the first AR applications and probably the most used one is in navigation. With the
help of GPS data, an AR system can overlay the best route to get from point A to point B.
Wikitude Drive, shown in Figure 7, is an application that uses GPS data and with the help
of the user's mobile phone, the selected route is displayed over the image in front of him.

Figure 7. Navigation: Wikitude Drive (Photo: [24])

Medical environment

In medicine, doctors can use AR to diagnose, treat and even perform surgery. By using
endoscopic cameras inside the patient's body, the doctors can see in 3D the region of
interest and can perform image guided surgery, as suggested by [25]. This endoscopic

augmentation can be applied in brain surgery, liver surgery, cardiac surgery and
transbronchial biopsy as suggested by [26], [27] and [28]. This type of augmentation is
obtained by using tracking system and a virtual penetrating mirror that can visualize at
least one virtual object, as presented in Figure 8.

Figure 8. The 3D image of a kidney affected by cancer (Photo: [29])

Another practical application of AR in medicine is training and educating doctors in a more immersive manner, as shown in Figure 9. ARnatomy is an application that aims to replace the use of textbooks, flash-cards and charts and to present the student with a virtual 3D image of the human skeleton. AccuVein is another application that projects the circulatory system onto the patient's body in order to make it easier for the nurse to draw blood. AR can also be used in plastic surgery, where the patient can have an intuitive view of the reconstruction results.

Figure 9. A model of human's organs for a Biology class (Photo: [30])


AR can also be used in patients' treatment and therapies. Patients suffering from depression, anxiety, addiction or other mental health conditions can be treated using different AR environments. AR can also be used to treat different phobias. For example, patients suffering from fear of heights can be treated by getting them used to walking on a virtual glass floor in tall buildings. The same can be done for patients suffering from arachnophobia, who can be put in an environment where they have to get used to spiders, as shown in Figure 10.

Figure 10. An example of AR application used for treating arachnophobia (Photo: [31])

Education

Besides the already mentioned tools with which a student can learn medicine, there are also other fields in which AR improves the way a person studies. 3D models can appear from a textbook, giving a better perspective on the subject of study. Elements 4D is an application that allows students to trigger 3D images of chemical elements. Arloon Plants is another application that students can use for biology lessons to learn about the structures and parts of plants. Math alive is based on marker cards that trigger exercises for counting and numeracy skills.

Figure 11. Example of a biology textbook using Augmented Reality (Photo: [32])


A survey completed by [33] presents the way students can interact with their lessons and
also how the teaching methods can improve by using AR. The teacher can use AR to
display different annotations that can help him with his course, as presented in [34].

Entertainment

There are a lot of AR applications for entertainment. From cultural apps, with
sightseeing and museum guidance, to gaming and many smart-phone apps, AR can
enhance the user's experience. While visiting a museum, the user can use a mobile phone
to project a multimedia presentation about what he is seeing. Or, as presented by [35], the user can virtually see the reconstruction of ancient ruins and get an intuitive feeling of how the ruins looked back in time, as shown in Figure 12. Wikitude World
Browser is an app that overlays information about stores, hotels, scenery and touristic
locations in real time.

Figure 12. Example of a view with the reconstruction of the Dashuifa's ruins (Photo: [7])

For gaming, AR can offer more than a physical board game by introducing animation and other multimedia objects. Pokemon Go is an example of an AR app, in which the player needs to walk as much as possible and look for Pokemons. Other games are marker-based and, with the help of some cards, the user can see 3D objects. Piclings is an iOS game in which the iPhone user takes a picture, redefines it digitally and incorporates it into the actual game. Junaio Browser is a well-known German app in which the user needed to point the smart-phone at the TV and answer a quiz. This game spread nationwide and a lot of users started to compete against each other for the big prize. Zombie ShootAR from Metaio is an AR game where the players need to shoot zombies that are superimposed onto the real world through their mobile device. Lego offers the possibility to see Lego products by simply scanning some cards from their website. Once a webcam is put in front of the computer screen, a 3D Lego object will appear.
Another application of AR is in advertising and commerce, as shown in Figure 13. Most techniques are marker-based, in which the user needs to point to the advertising card that will trigger an animation or a presentation of a product.


Figure 13. Example of a 3D virtual model of a MINI car. The 3D object appeared as the user pointed to the marker trigger (Photo: [7])

Military

From navigation to combat and simulation, AR also has applicability in the military field. The first head-mounted display gave pilots information about velocity, positioning and other navigation data. Afterwards, AR was used to offer a better visualization of targets and points of interest in combat. AR can also offer extra information to soldiers by using IR (infrared) cameras for night vision or heat-sensitive cameras that can show if someone is hidden nearby.
AR can also be used for battle planning, where multiple soldiers are connected to the same interface; they can see the battle plan in 3D and decide the best way to take action, as shown in Figure 14.

Figure 14. Example of AR application with the help of Hololens. In the picture, the two soldiers
share the same environment, a map on which they can make battle presumptions (Photo: [36])


Assembly and manufacturing

In order for a product to come to life, there are many steps through which it needs to go:
planning, design, ergonomics assessment, etc. A survey of AR applications in assembly was done by [37]. Boeing used the first AR assembly system for guiding technicians in building the airplane's electrical system. Other comprehensive surveys were performed by [38] and [39] about the use of augmented reality in the manufacturing industry, in which graphical assembly instructions and animations can be shown, and [40] wrote about the use of AR in design and manufacturing. State-of-the-art methods for developing CAD (computer-aided design) models from digital data acquisition, motion
capture, assembly modeling and human-computer interaction were presented by [41].
Figure 15 shows an example of a CAD assembly in AR.

Figure 15. Example of virtual door lock assembly (Photo: [28])

Robot path planning

Teleoperation is the process in which an assembly is controlled from a distance. A robot can be controlled from a long distance and it can execute tasks that have already been programmed. But, because long-distance communication problems might exist, it may be better to control a virtual version of the robot. AR allows this to happen and the user can see in
real time the results of his manipulations, as shown in Figure 16. These virtual
manipulations can predict some errors that might appear in reality and improve their
performance. Robot Programming using AR (RPAR) is a form of offline programming
that uses a video-tracking method from ARToolKit to eliminate a lot of calibration issues.


Figure 16. Virtual lines that show the planned motion of a robot arm (Photo: [34])

Pervasive Augmented Reality

A survey was conducted by [42] about the future goal of augmented reality, pervasive augmented reality (PAR). PAR aims to offer a continuous AR experience to the user, with as little interaction as possible. If standard AR is a context-aware experience, PAR's purpose is to sense the current context of the user and adapt accordingly. So far, most AR applications are developed to address one problem, with a specific solution, for a specific domain. PAR systems aim for an AR technology that can learn from the user's experience and context and adapt to it, without the user's interaction. But PAR, being a continuous AR experience, raises some hardware challenges as well as ethical ones. From the
hardware point of view, the system needs to be able to collect and process a lot of data in
real time, in order to offer a reliable solution to whatever the current context and situation
might be. Also, the collected data needs to be safe to use and respect the privacy of others.

4. CONCLUSION

Throughout this survey, the AR technology was presented, taking into account both the
technology behind it and its applicability. A lot of work has already been done in this area but, taking into account its evolution and its possibilities, much more will be developed in the coming years. Just as personal computers and smartphones changed the lives of all users, it is expected that wearable devices with AR technology will also have a huge impact. The future expectation for this technology is PAR, a continuous and easy-to-use AR experience.

BIBLIOGRAPHY

[1] Y. S. Hsu, Y. H. Lin and B. Yang, "Impact of augmented reality lessons on students’
STEM interest," Research and Practice in Technology Enhanced Learning, vol. 12,
no. 1, pp. 1-14, 2017.
[2] Wikipedia, "Head-mounted display," [Online]. Available: https:// en.wikipedia.org/
wiki/ Head-mounted_display. [Accessed 08 May 2017].


[3] W. Barfield, Fundamentals of wearable computers and augmented reality, CRC Press, 2015.
[4] J. P. Rolland, R. L. Holloway and H. Fuchs, "Comparison of optical and video see-
through, head-mounted displays," in Photonics for Industrial Applications, 1995,
pp. 293-307.
[5] C. Christiansen, "Flip the Media," [Online]. Available: http:// flipthemedia.com/
2012/ 01/ video-glasses-augmented-reality/ . [Accessed 11 May 2017].
[6] K. Kiyokawa, "A wide field-of-view head mounted projective display using
hyperbolic half-silvered mirrors," in 2007 6th IEEE and ACM International
Symposium on Mixed and Augmented Reality, Nara, Japan, 2007.
[7] J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani and M. Ivkovic,
"Augmented reality technologies, systems and applications," Multimedia Tools and
Applications, vol. 51, no. 1, pp. 341--377, 2011.
[8] R. Hamdani and Z. Liu, "Portable augmented-reality head-up display device,"
Google Patents, 2015.
[9] Wikipedia, "Google Glass," [Online]. Available: https:// en.wikipedia.org/ wiki/
Google_Glass. [Accessed 22 May 2017].
[10] Google, "Google Glass," [Online]. Available: https:// support.google.com/ glass/
answer/ 3064128?hl=en. [Accessed 17 May 2017].
[11] Google, "Quick Start," Google Glass, [Online]. Available: https://
developers.google.com/ glass/ develop/ mirror/ quickstart/ . [Accessed 17 May 2017].
[12] Microsoft, "Microsoft HoloLens Development Edition," [Online]. Available:
https:// www.microsoftstore.com/ store/ msusa/ en_US/ pdp/
productID.5061263800. [Accessed 11 May 2017].
[13] Microsoft, "HoloLens hardware details," [Online]. Available: https://
developer.microsoft.com/ en-us/ windows/ mixed-reality/
hololens_hardware_details. [Accessed 20 May 2017].
[14] S. Mann, Intelligent image processing, John Wiley & Sons, Inc., 2001.
[15] T. Mike, "Foxtail Marketing: Virtual and Augmented Reality Marketing Strategies
for the coming decade," [Online]. Available: https:// foxtailmarketing.com/ virtual-
augmented-reality-marketing-strategies-coming-decade/ . [Accessed 2017 May 17].
[16] L. Yi-bo, K. Shao-peng, Q. Zhi-hua and Z. Qiong, "Development actuality and
application of registration technology in augmented reality," in Computational
Intelligence and Design, 2008. ISCID'08. International Symposium on, vol. 2,
IEEE, 2008, pp. 69-74.
[17] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto and K. Tachibana, "Virtual
object manipulation on a table-top AR environment," in Ieee, 2000.
[18] D. Van Krevelen and R. Poelman, "Augmented Reality: Technologies, Applications
and Limitations," 2007.


[19] I. Barakonyi, T. Fahmy and D. Schmalstieg, "Remote collaboration using augmented reality videoconferencing," in Proceedings of Graphics interface 2004,
Canadian Human-Computer Communications Society, 2004, pp. 89-96.
[20] P. Mistry, P. Maes and L. Chang, "WUW-wear Ur world: a wearable gestural
interface," in CHI'09 extended abstracts on Human factors in computing systems,
ACM, 2009, pp. 4111-4116.
[21] J.-Y. Lee, S.-H. Lee, H.-M. Park, S.-K. Lee, J.-S. Choi and J.-S. Kwon, "Design
and implementation of a wearable AR annotation system using gaze interaction," in
International Conference on Consumer Electronics (ICCE), Digest of Technical
Papers, 2010 , pp. 185-186.
[22] P. Mountney, D. Stoyanov, A. Davison and G.-Z. Yang, "Simultaneous stereoscope
localization and soft-tissue mapping for minimal invasive surgery," in International
Conference on Medical Image Computing and Computer-Assisted Intervention,
Springer, 2006, pp. 347-354.
[23] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier and B. MacIntyre, "Recent
advances in augmented reality," IEEE computer graphics and applications, vol. 21,
no. 6, pp. 34-47, 2001.
[24] J. E. Swan and J. L. Gabbard, "Quantitative and qualitative methods for human-
subject experiments in Virtual and Augmented Reality," in Virtual Reality (VR),
IEEE, 2014, pp. 1-6.
[25] C. Bichlmeier, S. M. Heining, M. Feuerstein and N. Navab, "The virtual mirror: a
new interaction paradigm for augmented reality environments," IEEE Transactions
on Medical Imaging, vol. 28, no. 9, pp. 1498-1510, 2009.
[26] N. Navab, C. Bichlmeier and T. Sielhorst, Virtual penetrating mirror device for
visualizing of virtual objects within an augmented reality environment, Google
Patents, 2012.
[27] R. Shahidi, M. R. Bax, C. R. Maurer, J. A. Johnson, E. P. Wilkinson, B. Wang, J. B.
West, M. J. Citardi, K. H. Manwaring and R. Khadem, "Implementation, calibration
and accuracy testing of an image-enhanced endoscopy system," IEEE Transactions
on Medical imaging, vol. 21, no. 12, pp. 1524-1535, 2002.
[28] D. Reiners, D. Stricker, G. Klinker and S. Muller, "Augmented reality for
construction tasks: Doorlock assembly," Proc. IEEE and ACM IWAR, vol. 98, no.
1, pp. 31-46, 1998.
[29] D. Teber, S. Guven, T. Simpfendorfer, M. Baumhauer, E. O. Guven, F. Yencilek, A.
S. Gozen and J. Rassweiler, "Augmented Reality: A New Tool To Improve Surgical
Accuracy during Laparoscopic Partial Nephrectomy? Preliminary In Vitro and In
Vivo Results," European Urology, vol. 56, no. 2, p. 332–338, 2009.
[30] K. Lee, "Augmented reality in education and training," TechTrends, vol. 56, no. 2,
pp. 13--21, 2012.


[31] M. C. J. Lizandra, "Augmented reality used for treating phobias," [Online]. Available: http:// users.dsic.upv.es/ ~mcarmen/ ar-phobia-small-animals.html.
[Accessed 16 May 2017].
[32] "Example of augumented reality in text books," [Online]. Available: http://
www.twoguysandsomeipads.com/ p/ augmented-reality.html. [Accessed 10 May
2017].
[33] O. Pasareti, H. Hajdu, T. Matuszka, A. Jambori, I. Molnar and M. Turcsanyi-Szabo,
"Augmented Reality in education," INFODIDACT 2011 Informatika
Szakmodszertani Konferencia, 2011.
[34] R. T. Azuma, "A survey of augmented reality," Presence: Teleoperators and virtual
environments, vol. 6, no. 4, pp. 355-385, 1997.
[35] Y. Liu and Y. Wang, "AR-View: An augmented reality device for digital
reconstruction of Yuangmingyuan," in IEEE International Symposium on Mixed
and Augmented Reality-Arts, Media and Humanities, 2009. ISMAR-AMH 2009.,
IEEE, 2009, pp. 3-7.
[36] NextReality, "Royal Australian Air Force Using HoloLens to Experiment with
Augmented Reality," [Online]. Available: https:// hololens.reality.news/ news/
royal-australian-air-force-using-hololens-experiment-with-augmented-reality-
0175955/ . [Accessed 10 May 2017].
[37] X. Wang, S. Ong and A. Nee, "A comprehensive survey of augmented reality
assembly research," Advances in Manufacturing, vol. 4, no. 1, pp. 1-22, 2016.
[38] S. K. Ong and A. Y. C. Nee, Virtual and augmented reality applications in
manufacturing, Springer Science & Business Media, 2013.
[39] P. Fite-Georgel, "Is there a reality in industrial augmented reality?," in International
Symposium on Mixed and Augmented Reality (ISMAR), 2011 10th, IEEE, 2011,
pp. 201-210.
[40] A. Y. Nee, S. Ong, G. Chryssolouris and D. Mourtzis, "Augmented reality
applications in design and manufacturing," CIRP Annals-manufacturing
technology, vol. 61, no. 2, pp. 657-679, 2012.
[41] M. C. Leu, H. A. ElMaraghy, A. Y. Nee, S. K. Ong, M. Lanzetta, M. Putz, W. Zhu
and A. Bernard, "CAD model based virtual assembly simulation, planning and
training," CIRP Annals-Manufacturing Technology, vol. 62, no. 2, pp. 799-822, 2013.
[42] J. Grubert, T. Langlotz, S. Zollmann and H. Regenbrecht, "Towards pervasive
augmented reality: Context-awareness in augmented reality," IEEE transactions on
visualization and computer graphics, 2016.


SYSTEM ANALYSIS OF ROMANIA’S INTRADAY ENERGY MARKET

Alexandra Maria Ioana Corbea (Florea) 1*


Adina Uţă 2

ABSTRACT

The article presents a system analysis of Romania’s Intraday energy market, describing the trading mechanism, modeling the activities on the market through UML diagrams and presenting the operating rules and the indices and indicators used for analyzing market activity.

KEYWORDS: energy markets, Intraday market, analysis

1. INTRODUCTION

One of the main focus areas of the current European power trading environment is
represented by the short-term markets, with the marketing and trading of electricity on these markets growing rapidly in importance.
Countries like Germany or the UK have seen a significant increase in intraday energy transactions, so trade signals can occur rapidly and the volume and velocity of
information available to traders can be almost overwhelming. As mentioned in [1] a
number of factors such as the rapid and massive move in generation to more
unpredictable renewable sources, the impact of smart grid and smart devices on the
demand side and the European Union’s push for a single energy market have all
contributed to the rapid rise of the intra-day power markets. As such, automated trading
systems represent one of the tools used for meeting the challenges and demands that
arise from the situation, namely an increase in both speed and volume of data traded, as
mentioned in [2]. A functioning intraday market will increase the efficiency of the
balancing market. It will allow better deployment of resources if unit commitment can be
rescheduled and balancing resources used only when needed.
[3] clarifies that this development is driven by legal regulations and the increasing
proportion of volatile, renewable production capacity, but also by attractive earnings
potential in the new markets.
These changes of focus on the various energy markets offer new opportunities but also
new challenges for traders and energy producers entering this new playing field,

1*
corresponding author, Assistant Professor PhD, Faculty of Cybernetics, Statistics and Economic
Informatics, Bucharest University of Economic Studies, Bucharest, alexandra.florea@ie.ase.ro
2
Professor PhD, Faculty of Cybernetics, Statistics and Economic Informatics, Bucharest University of
Economic Studies, Bucharest, adina.uta@ie.ase.ro


challenges that lead directly to additional requirements for IT systems in terms of trade
and optimisation.
This paper presents a part of the research on the electricity market in Romania carried
out within the "Intelligent system for trading on the wholesale electricity market"
(SMARTRADE) project, funded by the National Authority for Scientific Research and
Innovation through European Regional Development Fund (ERDF), namely a system
analysis of Romania’s Intraday energy market.

2. ELECTRICITY MARKETS

Electricity markets operate at different levels, varying in time (from real-time balancing
markets to long-term contracts), geographical location (from local offers to wholesale
trans-national markets) and customer type (wholesale markets or retail markets that
address consumers directly).
Within the retail market, the actors are the suppliers that offer electricity contracts
approved by the competent regulatory authority and the consumers who have the right to
choose their supplier. Suppliers buy electricity from producers (generators) and sell it to
consumers. Suppliers send invoices reflecting the price charged for delivered, transmitted and distributed electricity, as well as taxes and charges that are sometimes used to support the
production of renewable energies, protect more vulnerable consumers or promote other
policy objectives. Suppliers differentiate their bids depending on the price or origin of
electricity.
In the wholesale market, the participants are producers (generators), electricity suppliers
(who may at the same time be producers) as well as large industrial consumers.
Electricity differs from most of the other goods in that it has to be produced when it is
needed because it cannot be stored easily. Therefore, most electricity transactions involve
the supply of electricity at some point in the future.
Depending on the type of contract or market, transactions can cover different periods of
time:
• long-term contracts: up to 20 years or more;
• on future markets: weeks or years in advance;
• on the day-ahead market: the next day;
• on the intraday market: delivery within a specified time period (e.g., one hour);
• on the balancing market: balancing in real time supply and demand.
Electricity can be traded privately between two parties or can be sold through an energy
exchange that brings together multiple buyers and sellers and offers transparent prices.
Energy exchange rates vary according to supply and demand: on the wholesale market,
they may increase at peak demand or may fall to zero or even less in cases of excess
supply.
The purchase of electricity by suppliers from producers or other suppliers, for the purpose
of resale or use for their own consumption, takes place in Romania in an organized
framework, represented by the Wholesale Electricity Market.


3. ROMANIA’S INTRADAY ENERGY MARKET

The Intraday Electricity Market (IM) is a component of the wholesale electricity market
where hourly transactions with active electricity are made for each delivery day, starting from the day before the delivery day (after the transactions on the day-ahead market have been concluded) and continuing up to a certain amount of time before delivery or consumption begins [4].
The purpose of this market is to help balance the contracted surplus or power deficit
(imbalances that appear due to transactions on the day-ahead market) by selling or buying it.
Within the Intraday Market, trading can be done for any calendar day at hourly trading
intervals in which each market participant (seller or buyer) can submit bids or sales offers
for each trading session.
Electricity sale or purchase offers are bids (in the case of sale) or orders (in case of
purchase) of quantity-price type.
For each of the 24 hourly delivery times of each day of the year, a trading instrument is
defined (except for the days when the change to or from daylight saving time occurs, for which 25 and 23 instruments respectively are defined).
Within the Trading System, SC Opcom SA establishes a unique alphanumeric
identification code of the form INThhddmmyy (where h is the delivery time interval, d is
the delivery day, m is the month and y the year) for each instrument, code on the basis of
which in the trading system, participants can obtain information on the time horizon for
which the transaction is made, as well as the Day / Month / Year of delivery.
The trading process is as follows: the participants enter into the trading system, for each
delivery time interval, purchase offers or distinct sales bids consisting of quantity-cost
pairs, selecting from the list of market instruments the instrument created for the time
interval and introducing in the system the offer type (buy or sell), the quantity to be traded
(in MWh, positive numbers with a maximum of 3 decimal places) and the proposed price
(in Lei, positive numbers with a maximum of 2 decimal places). Each offer is
automatically assigned a unique identification number and a time stamp of the form "hh:mm:ss", specifying the hour, minute and second of the offer entry. In the
bidding process, an hourly offer may be accepted wholly or partly depending on the
market conditions and the conditions of the calculated validation offer, diminished by the
value of the introduced bidding offer. During the trading session, Participants may enter,
modify, withdraw for further reactivation or cancel submitted bids.
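A minimal data model for such an hourly offer could look as in the sketch below (our own illustration under the rules just described, not the actual trading system): the quantity is kept to at most three decimals of MWh, the price to at most two decimals of lei, and each offer receives an identification number and a time stamp.

# Illustrative representation of an hourly offer (hypothetical, not Opcom software).
from dataclasses import dataclass, field
from datetime import datetime
from decimal import Decimal, ROUND_HALF_UP
from itertools import count

_next_id = count(1)

@dataclass
class HourlyOffer:
    instrument: str        # e.g. "INT14210317"
    side: str              # "buy" or "sell"
    quantity_mwh: Decimal  # positive, at most 3 decimal places
    price_lei: Decimal     # positive, at most 2 decimal places
    offer_id: int = field(default_factory=lambda: next(_next_id))
    timestamp: datetime = field(default_factory=datetime.now)

    def __post_init__(self):
        # round to the allowed precision and reject non-positive values
        self.quantity_mwh = self.quantity_mwh.quantize(Decimal("0.001"), ROUND_HALF_UP)
        self.price_lei = self.price_lei.quantize(Decimal("0.01"), ROUND_HALF_UP)
        if self.quantity_mwh <= 0 or self.price_lei <= 0:
            raise ValueError("quantity and price must be positive")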
Each purchase offer entered into the system will automatically be compared with the validation guarantee and will be rejected if the bid value exceeds the value of the validation guarantee for the IM diminished by the value of the previously entered purchase bids. If a Participant has received a rejection message for the entered purchase offer, he
may enter a modified Purchase Offer so that the bid value does not exceed the guarantee,
or he may change and/or cancel the Purchase Orders previously entered so as to create the
opportunity to enter new purchase offers that meet the guarantee validation condition. The
offers that fulfill the compatibility condition (purchase price higher or at least equal to the
sale price, or lower sale price or at most equal to the purchase price) are automatically
linked, once, at the end of the time interval which allows entering / modifying / canceling

bids. The correlation process ends when all the quantity of the compatible orders has been
traded.
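The correlation step can be summarized by the simplified sketch below (our own illustration of the rules described above, not the actual Opcom matching engine): purchase offers are taken in descending order of price, sale offers in ascending order, ties are broken by the time stamp, and each resulting transaction is concluded at the price of the purchase offer.

# Simplified illustration of the correlation (matching) process.
def match_offers(buy_offers, sell_offers):
    # each offer: {"price": ..., "quantity": ..., "timestamp": ...}
    buys = sorted(buy_offers, key=lambda o: (-o["price"], o["timestamp"]))
    sells = sorted(sell_offers, key=lambda o: (o["price"], o["timestamp"]))
    trades = []
    for buy in buys:
        for sell in sells:
            if buy["quantity"] <= 0:
                break
            # offers are compatible when the purchase price covers the sale price
            if sell["quantity"] <= 0 or sell["price"] > buy["price"]:
                continue
            traded = min(buy["quantity"], sell["quantity"])
            buy["quantity"] -= traded
            sell["quantity"] -= traded
            # the transaction is concluded at the price of the purchase offer
            trades.append({"quantity": traded, "price": buy["price"]})
    return trades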
Any offer entered into the system may be canceled, modified or suspended by the market
participant. Any change to an offer involves automatically canceling the initial offer data
and updating the data for the new bid with the time stamp of the change. At this stage all
participants can view the ten best purchase offers as well as the ten best sale offers, keeping
the anonymity of the participants who have introduced these offers, as well as the position
of their own offers within the market, which are marked separately.
During this stage the offers are validated in terms of the value of the guarantees. The
amount of the existing guarantee will diminish, iteratively, with the value of the purchase
bids entered and the value of the VAT; if at any given time the guarantee is less than the
value of the bid offered, it will be invalidated. After the trading session ends, SC Opcom
SA sends the transaction confirmations to the Participants through the trading system.
Participants may dispute the results confirmed in the Transaction Report published by the
trading system. If no appeal has been lodged, the transaction is deemed assumed. The
submitted complaints are analyzed and resolved in the sense of accepting or rejecting
them. In the case of accepted disputes, the affected transactions will be canceled, both on
the sales side and on the purchase side.

4. MAIN ACTIVITIES ON THE INTRADAY MARKET

A summary of the activity flow comprises a series of main steps, for which
subactivities can easily be identified.
The first step in working on the Intraday market is the definition of trading instruments by
SC Opcom SA, after which the Registration of participants takes place. The registration
process contains the following:
• Receiving the registration requests submitted by the license holders
• Verifying the submitted documents, sending additional information to the license holders and correcting invalidated information
• Registration of license holders as participants in IM
• Submission by the participants of the bank payment guarantee letter
Once the registration process has finished the participants can move on to submitting
offers and validating bids:
• Entering buy and / or sales bids in the trading system
• The possibility of definitive cancellation, modification (ie cancellation of the
offer and its registration as a new offer with changed date) or suspension of bids
by the Participants
• Sorting bids based on price (ascending for sale offers and descending for purchase
offers) and time stamp for bids of the same type with the same price
• The possibility to view the top ten purchase and/or sale offers.

• Validation of offers in terms of guarantee value (if the bank guarantee minus the value of the outstanding payment obligations, minus the VAT amount, minus the value of the purchase offers valued up to that moment is smaller than the value of the purchase offer entered, the offer will be invalidated; see the illustrative check below)
• Notification of the IM participants whose offers have been invalidated
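Expressed as a small hypothetical helper (illustration only, with invented parameter names), the validation condition above reduces to the following check.

# A new purchase offer is valid only if the available guarantee covers its value.
def purchase_offer_is_valid(bank_guarantee, outstanding_obligations, vat,
                            purchases_valued_so_far, new_offer_value):
    available = (bank_guarantee - outstanding_obligations - vat
                 - purchases_valued_so_far)
    return new_offer_value <= available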
Following the submission of bids by the participants comes the correlation of offers and
transaction notification. This stage consists of:
• At the opening of this stage, if in the Trading System there is a purchase hourly offer (or offers) with a price greater than or equal to that of a sale hourly offer (or offers), the Trading System will automatically correlate them. The price at which transactions are concluded is that of the purchase offer(s).
• Transmission of the Transaction Confirmations to the IM participants.
If the participants are unhappy with the results, they are allowed to register appeals. Once registered, the appeals are analyzed and a resolution must be reached:
• If the appeal has been accepted, we will proceed to the cancellation of the
transactions affected by the error, both on the sales side and on the purchase side
• If there are disputes that can not be resolved by the deadline for settlement of
appeals (14:30), the Transaction Confirmations and Physical Notifications of SC
Opcom SA become mandatory
Once the potential appeals have been resolved, the system provides the physical notifications to the Balancing Responsible Parties and transmits them to the TSO, after which the settlement notes are calculated and issued and the available bank guarantees are recalculated.
The messages exchanged between the participants, the system and the market operator are
illustrated in Figure. 1.


Figure 1. Sequence diagram for the activity flow on the intraday market

As every other component of the wholesale energy market, the Intraday market must
adhere to a set of rules that governs its activity. The most important ones are:
• The trading day is any calendar day;
• The trading time is the hour;
• An IM participant may submit bids and offers for each trading period;
• Electricity sale / purchase offers are simple quantity-price quotes / orders;
• The matching algorithm takes into account the criteria for ordering the bids submitted by market participants: descending by price for buy orders and ascending by price for sell orders;
• In the case of orders offering equal prices, the timestamp of each order is taken
into account;
• The correlation process will begin with the highest bid and with the order of sale
with the lowest price and will continue taking into account the ordering criteria;
• Transactions are concluded at the price of response bids to a counter offer
existing in the trading system.
• The hourly offer consists of a quantity-price pair and is an offer for a single
hourly delivery schedule with the firm price and quantity;

• In the automatic process of offers correlation and establishment of transactions by the Trading System of the IM, the hourly offer may be accepted in whole or in part depending on the market conditions and the conditions of the offer;
• Participants will enter separate bids for each hourly delivery schedule by selecting
the instrument created for the desired delivery time interval from the market
instrument list;
• At the time of receiving a rejection message for the submitted purchase offer, the
Participant may take the following actions: to enter a modified bid so that the bid
value does not exceed the calculated validation guarantee diminished by the
value of the previously entered purchase bids; modify and / or cancel purchase
orders previously entered so as to create the possibility of introducing new
purchase offers that fulfill the validation condition against the value of the
validation guarantee;
• Offers that meet the compatibility condition (purchase price higher or at least
equal to the sale price, or sale price lower or at most equal to the purchase price),
are correlated by an automated process conducted by the trading system, once at
the end of the time period in which it is allowed to enter / modify / cancel bids;
• In the correlation process the trading system of IM complies with sorting rules
based on price and time stamp;
• Purchase offers will be correlated in descending order of the respective bid price,
the first correlated purchase offer will be one with the highest price;
• Sales bids will be correlated in ascending order of the respective bid price; the
first correlated sale offer will be the one with the lowest price;
• The correlation process will end when all the quantity of the compatible orders
has been traded;
• The price at which transactions will be finalized, as a result of the correlation rules automatically applied by the market’s trading system, is the price of the purchase bids.

5. MODELING THE MARKET

While analyzing the market, one of the more useful tools we can use in order to get a thorough understanding of the functionalities our system must provide is the Use Case Diagram. In the next section of the paper we have developed the general use case diagram for the market, as well as a couple of detailed diagrams for the most important identified use cases. We have also created a textual description of these use cases, using the UML standardized template [5].


Figure 2. General Use Case Diagram for IDM

Figure 3. Use Case Diagram for Registration Participants


Use case element Description


Code CU21
Name Registration Participants
Status Sketching
Scope Registration of participants on the IDM transaction
Main actor OPCOM
Description OPCOM registers the license holders who submitted a registration
request, after checking the documents and the bank guarantee
Precondition -
Postcondition Transmission of documents
Trigger Desire of the IDM Participant to participate in IDM trading on a
given date
Base flow - Receipt of registration requests submitted by license holders
- Verifying the submitted documents
- Transmission of additional information to the license holders
- Registration of license holders as participants in IDM
- Participants submit a bank guarantee of payment
Alternative flows If after verification of the documents there is invalidated information, the license holders will be informed so that they can correct and retransmit it
Relations The trading system
Frequency of use Daily
Business rules -


Figure 4. Use Case Diagram for Transmitting offers and purchase offers validation

Use case element Description


Code CU22
Name Transmitting offers and purchase offers validation
Status Sketching
Scope Inserting offers to purchase and / or sale in the trading system and
informing the participants on the best ten offers for sale and the best
ten purchase offers
Main actor IDM Participant
Description Market participants enter the sale / purchase offers (which they can later modify or cancel); offers are ordered by price, and the ten best sale offers and the ten best purchase offers can be viewed; offers are validated from the point of view of the guarantee, and the participants whose offers have been invalidated are notified
Precondition Participants to be registered
Postcondition Linking offers
Trigger Transmission of buy / sell offers by participants

Base flow a. Introducing purchase and / or sales offers in the trading system
b. The possibility of definitive cancellation, modification (ie
cancellation of the offer and its registration as a new offer with a
changed date) or suspension of offers by the Participants
c. Sorting offers based on price (ascending for sale offers and descending for purchase offers). If the offers are of the same type and have the same price, the ordering will also be made
according to the time stamp
d. Possibility to view the best ten purchase offers and the ten best
offers for sale
e. Validation of offers in terms of guarantee value (if the bank guarantee minus the value of unpaid payment obligations, minus the VAT amount, minus the value of the purchase offers valued up to that moment is smaller than the value of the purchase offer entered, the offer will be invalidated)
f. Notification of IDM Participants whose offers have been
invalidated
Alternative flows -
Relations Linking offers
IDM participant
OPTCOM
Frequency of use Daily
Business rules 1. The trading day is any calendar day;
2. The trading time is the hour;
3. A IDM participant may submit sale / purchase offers for each
trading period;
4. Electricity sale and purchase offers are quantity / price orders /
offers;
5. The hourly offer consists of a quantity-price pair and is an offer
for a single delivery time interval with the firm price and
quantity.
6. Participants will enter separate bids for each delivery time
interval by selecting the instrument created for the desired
delivery time interval from the Market Instrument list.
7. When receiving a rejection message for the submitted purchase offer, the IDM Participant may take the following actions: introduce a modified bid so that the bid value does not exceed the validation guarantee diminished by the value of the previously entered purchase bids; modify and/or cancel purchase orders previously entered so as to create the possibility of introducing new purchase offers that fulfill the validation condition against the value of the validation guarantee.


6. ANALYSIS INDICES AND INDICATORS

In order to analyze the activity performed on this market, a set of indicators is used, as illustrated in the Methodology of wholesale electricity market monitoring for assessing the competition level on the market and preventing the abuse of dominant position:
1. Weighted average price (lei / MWh) and volume traded [MWh] over a range of
analysis (day / hour).
2. Wholesale energy market concentration indicators and their components - according to economic theory, the following concentration indicators are defined:
• The Herfindahl-Hirschman Index (HHI) is the sum of the squared market shares of the participants that have finalized transactions (%):
HHI = Ms(1)^2 + Ms(2)^2 + ... + Ms(n)^2
where n = the number of participants and Ms(i) = the market share (%) of participant i.
The significance of the indicator values is:
- HHI < 1000 unconcentrated market;
- 1000 < HHI < 1800 moderate concentration of market power;
- HHI > 1800 high market power concentration.
• Market concentration ratio (%), which is evaluated through 2 elements:
• C3 – the total market share of the top three market participants, C3 = Ms(1) + Ms(2) + Ms(3)
The significance of the indicator values is:
- C3 →0% perfect competition
- 40%< C3 < 70% medium concentrated market;
- C3 > 70% highly concentrated market.
• C1 – market share of the largest market participant (%)
The significance of the indicator values is:
- C1>20% worrying market concentration;
- C1>40% suggests the existence of a dominant position on the market;
- C1>50% indicates a dominant position on the market.
These indicators can be calculated for the entire market (electricity, system technology
services - STS) or its components, on which competition is directly manifested.
3. Pivotal supplier index (PSI) – measures the extent to which the available bid of a participant is required to ensure the demand of the system, after taking into account the bids available from the other participants.

If the ratio r between the supply available from the other participants and the system demand is below 1, then PSI = 1 and the participant is pivotal (equivalent to the absence of competition); an illustrative computation of these indicators is sketched below.
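For illustration, the indicators above can be computed as in the following sketch (our own example with made-up figures, not part of the monitoring methodology itself): the volume-weighted average price, HHI, C1, C3 and PSI follow directly from their definitions.

# Illustrative computation of the market analysis indicators described above.
def weighted_average_price(trades):
    # trades: list of (price_lei, volume_mwh) pairs over the analysis interval
    total_volume = sum(volume for _, volume in trades)
    return sum(price * volume for price, volume in trades) / total_volume

def concentration_indicators(market_shares):
    # market_shares: shares in %, one per participant with finalized transactions
    shares = sorted(market_shares, reverse=True)
    hhi = sum(s ** 2 for s in shares)  # sum of squared market shares
    c1 = shares[0]                     # share of the largest participant
    c3 = sum(shares[:3])               # combined share of the top three participants
    return hhi, c1, c3

def pivotal_supplier_index(supply_by_participant, demand, participant):
    # PSI = 1 when demand cannot be covered without this participant's supply
    others = sum(q for p, q in supply_by_participant.items() if p != participant)
    return 1 if others < demand else 0

# Example with made-up shares of 40%, 30%, 20% and 10%
print(concentration_indicators([40, 30, 20, 10]))  # (3000, 40, 90)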

7. CONCLUSIONS

The description of the way of functioning, the modeling of the activities carried out on the
intraday market, the identification of the activities flow and the system of rules underlying
its operation represent a first step for the realization of a decision support system for the
participants in trading, as well as the activity analysis and forecast on the intraday market.
This system will also include other types of energy markets such as the centralized
contract market, the day-ahead market, the balancing market or the bilateral contracts
market.

8. ACKNOWLEDGMENT

This paper presents the scientific results of the project “Intelligent system for trading on
wholesale electricity market” (SMARTRADE), co-financed by the European Regional
Development Fund (ERDF), through the Competitiveness Operational Programme (COP)
2014-2020, priority axis 1 – Research, technological development and innovation (RD&I)
to support economic competitiveness and business development, Action 1.1.4 - Attracting
high-level personnel from abroad in order to enhance the RD capacity, contract ID
P_37_418, no. 62/05.09.2016, beneficiary The Bucharest University of Economic Studies.

9. REFERENCES

[1] FISGlobal - The State of Short-term Power Trading in Europe, 2017, [online],
Available at https://www.fisglobal.com/solutions/institutional-and-wholesale/-
/media/fisglobal/files/brochure/the-state-of-short-term-power-trading-in-
europe.pdf, [Accessed 10 November 2017]
[2] Iván Pineda, Paul Wilczek - Creating the Internal Energy Market, EWEA The
European Wind Energy Association, 2012, [online], Available at
http://www.ewea.org/uploads/tx_err/Internal_energy_market.pdf, [Accessed 10
November 2017], ISBN: 978 -2-930670-01-0
[3] Gary M. Vasey, Philippe Vassilopoulos, Chris Whellams, Dr. Simon Tywuschik -
Short-term trading – an attractive market with high IT requirements, 2017,
http://www.psi.de/en/psi-energymanagement/magazin/energy-trading-on-short-
term-markets, [Accessed 10 November 2017]
[4] Alexandra Maria Ioana Florea, Anda Belciu - Study on electricity markets in
Romania, Database Systems Journal, vol. VII, no. 4/2016, ISSN 2069 – 3230
http://dbjournal.ro/archive/26/26_2.pdf,
[5] Ion Lungu, Gheorghe Sabau, Manole Velicanu, Sisteme informatice: analiza,
proiectare si implementare, Editura Economica, Bucureşti, 2003, 526pg, ISBN
9735908301


SOLUTIONS FOR IMPLEMENTING THE N-BODY SIMULATION ON THE PASCAL COMPUTE UNIFIED DEVICE ARCHITECTURE
Dana-Mihaela Petroşanu 1*
Alexandru Pîrjan 2

ABSTRACT

In this paper, we develop and propose novel solutions for implementing the N-body
simulation on the latest Pascal parallel processing Compute Unified Device Architecture
(CUDA), so as to attain a high level of performance and efficiency. The innovative aspect of
our research emerges from the development and implementation of the N-body simulation
on the latest Pascal Compute Unified Device Architecture, making use of the latest
features of the CUDA Toolkit 8.0 and employing the architecture’s dynamic parallelism
feature in order to manage effectively the load imbalance of the processing tasks that
appears once the number of bodies assigned to the processing threads differs.

KEYWORDS: Graphics Processing Unit (GPU), N-Body Simulation, Verlet-Leapfrog Algorithm, CUDA, Dynamic Parallelism.

1. INTRODUCTION

Within this paper, we have proposed and developed innovative solutions that facilitate the
implementation of the N-body simulation on the latest Pascal Compute Unified
Device Architecture (CUDA), launched in 2016 by the NVidia company. Of particular
interest when developing the implementation was to attain a high level of performance
and efficiency. An efficient implementation of the N-body simulation has multiple
applications ranging from astrophysical simulation to a variety of computational tasks in
numerous scientific fields such as: fluid mechanics, medicine and computer graphics
applications.
In [1], the authors develop an implementation of the N-body simulation on the GeForce
8800 GTX NVidia Graphics Processing Unit. In [2], the authors depict how GPUs can be
used for N-body simulations, in order to obtain improvements in performance over the
Central Processing Units available in 2006. Thus, on an ATI X1900XTX, they develop an
algorithm for performing the force computations that represent the greatest part of the workload of stellar and
molecular dynamics simulations. In [3], the authors develop an implementation of N-body
simulation on the Intel Knights Landing Central Processing Unit architecture.

1*
corresponding author, PhD Lecturer Department of Mathematics-Informatics, University Politehnica of
Bucharest, 313, Splaiul Independentei, district 6, code 060042, Bucharest, Romania, danap@mathem.pub.ro
2
PhD Hab, Associate Professor Faculty of Computer Science for Business Management, Romanian-American
University, 1B, Expozitiei Blvd., district 1, code 012101, Bucharest, Romania, alex@pirjan.com


Even though several implementations of the N-body simulation exist in the scientific
literature, most of them are confronted with serious limitations originating from the
huge computational processing power required. The novelty of our approach resides in
developing and implementing the N-body simulation on the latest parallel processing
Pascal Compute Unified Device Architecture, benefitting from the most powerful features
of the CUDA Toolkit 8.0, like the dynamic parallelism feature that helps us to solve
efficiently the unbalancing of the processing tasks that appears once the number of
corresponding bodies differs throughout the processing threads.
We have used the dynamic parallelism feature to call an additional kernel function with
the purpose of processing in parallel the last states of the bodies, consequently attaining a
high level of performance and efficiency for the developed solution. Although there are
many works in the literature that implement the N-body simulation, to our best
knowledge, up to this moment none of them have implemented the N-body simulation on
the latest Pascal Compute Unified Device Architecture, making use of the architecture's
dynamic parallelism feature.

2. THE N-BODY SIMULATION IN THE ALL-PAIRS APPROACH

In the following, we depict the all-pairs approach of the N-body simulation, a
technique based on the evaluation of all the interactions, considering all the possible pairs
for each of the N considered bodies [4]. Thus, the number of interactions is N(N−1)/2. In the
following, we denote the vectors with lowercase bold letters. Thus, for each positive
integer i, 1 ≤ i ≤ N, we denote by x_i the initial 3D position and by v_i the initial velocity
of the i-th of the N bodies, and by f_ij the force vector caused on the i-th body by the gravitational
attraction of the j-th body.
According to Newton's law of universal gravitation, each of the N bodies attracts
every other body with a force that is directly proportional to the product of their masses
and inversely proportional to the square of the distance between them [5], as depicted in
the following equation:

f_ij = G · m_i · m_j / |r_ij|²   (1)

where the indexes i and j refer to two bodies, having the masses m_i and m_j, |r_ij| is the module of
the vector r_ij that has the origin at the body i and the extremity at the body j, f_ij is the module of
the force vector (the magnitude of the force), while G is the gravitational constant.
This equation can be written in the vector form, that
highlights the directions of the f_ij and r_ij vectors, using the unit vector r̂_ij = r_ij / |r_ij|:

f_ij = (G · m_i · m_j / |r_ij|²) · r̂_ij = G · m_i · m_j · r_ij / |r_ij|³   (2)

Taking into account all the possible pairs between the body i (1 ≤ i ≤ N) and all the
other N−1 bodies, one obtains the total force F_i as the sum of all the interactions,
represented by the vectors f_ij (except the case when j = i, because in this case the
denominator of the equation (2) becomes zero):


F_i = Σ_{1≤j≤N, j≠i} f_ij = G · m_i · Σ_{j≠i} m_j · r_ij / |r_ij|³   (3)

Under the effect of the interactions, the bodies tend to move from their initial positions,
approaching each other. As the distances |r_ij| decrease, the forces grow without any
limit. This growth could become an impediment when applying numerical methods for
integration [6]. Generally, when using the N-body simulations in astrophysics, the
collisions between the N considered bodies are not possible. Even if the distances
decrease and tend to zero, the bodies (that represent galaxies) do not collide but pass near
or through each other. In order to avoid the unlimited growth of the forces, one adds a
softening positive factor, denoted by ε, at the denominator of the equation (3):

F_i ≈ G · m_i · Σ_{1≤j≤N} m_j · r_ij / (|r_ij|² + ε²)^(3/2)   (4)

When adding this factor, the condition j ≠ i is no longer necessary, because when j = i,
the vector r_ii = 0 and therefore the force becomes zero, while the
denominator of equation (4) is not zero. Using the softening factor, the magnitudes of the
interactions between the bodies are limited and thus the numerical integration is
facilitated.
Using the equation (4), one can express the acceleration a_i of the i-th body as:

a_i = F_i / m_i ≈ G · Σ_{1≤j≤N} m_j · r_ij / (|r_ij|² + ε²)^(3/2)   (5)
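A direct translation of equation (5) can serve as the classical sequential CPU reference against which a parallel implementation is later compared. The following C++ sketch is our own illustration (the Body structure and the parameter names are assumptions, not the authors' code); it computes the softened accelerations of all N bodies in O(N²) operations:

#include <cmath>
#include <vector>

struct Body {
    double x, y, z;     // position components
    double vx, vy, vz;  // velocity components
    double m;           // mass
};

// All-pairs accelerations following equation (5), with softening factor eps.
// The vectors ax, ay, az must be pre-sized to b.size().
void computeAccelerations(const std::vector<Body>& b,
                          std::vector<double>& ax, std::vector<double>& ay,
                          std::vector<double>& az, double G, double eps) {
    const std::size_t n = b.size();
    for (std::size_t i = 0; i < n; ++i) {
        double axi = 0.0, ayi = 0.0, azi = 0.0;
        for (std::size_t j = 0; j < n; ++j) {
            const double rx = b[j].x - b[i].x;   // r_ij = x_j - x_i
            const double ry = b[j].y - b[i].y;
            const double rz = b[j].z - b[i].z;
            // Softened denominator (|r_ij|^2 + eps^2)^(3/2); the j == i term vanishes.
            const double d2 = rx * rx + ry * ry + rz * rz + eps * eps;
            const double invD3 = 1.0 / (d2 * std::sqrt(d2));
            axi += G * b[j].m * rx * invD3;
            ayi += G * b[j].m * ry * invD3;
            azi += G * b[j].m * rz * invD3;
        }
        ax[i] = axi; ay[i] = ayi; az[i] = azi;
    }
}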

Taking into account the nature of the problem that is modeled using the
N-body simulation, one can choose different integration methods [7]. In our case,
in order to obtain the current positions and velocities of each body, we have used the
Verlet-Leapfrog Algorithm, a computationally efficient algorithm applicable to our
problem. This algorithm is frequently used in molecular dynamics simulations, in N-body
simulation problems, and in computer graphics.
The Verlet-Leapfrog Algorithm is a numerical method useful for integrating
Newton's equations of motion. Even if this method was previously used in 1792 by the
French mathematician and astronomer Jean Baptiste Joseph Delambre, it is known as
Verlet's algorithm, after Loup Verlet, who used it in molecular dynamics in 1967. This approach was
also applied in 1909 by Cowell and Crommelin, in order to compute the orbit of Halley's
Comet, and in 1907 by Carl Störmer, in his study regarding the trajectories of charged
particles in a magnetic field.
The Verlet integration method has some important advantages, arising from its properties
[8]. Thus, it provides a good numerical stability and time reversibility. Considering the
unpredictable nature of individual atoms or molecules motion, an important problem that
arises when modeling problems related to this motion is to use accurate and stable
integration schemes for the obtained ordinary differential equations. Moreover, the
number of equations can be very large, as they are 6 for each particle (3 for the
components of the position vector and 3 for the components of the velocity).


If considering N particles, the total number of equations is 6N, while the number of
interaction terms is N(N−1)/2 in the case of pair-wise interactions. As a consequence, one
must use an algorithm that reduces to the minimum the necessary number of evaluations
that must be made on the right side of the obtained ordinary differential equations.
The Verlet integration schemes satisfy all of the above-mentioned requirements and
comprise three main different algorithms: Basic, Leapfrog and Velocity. After analyzing
and testing them, we have chosen for solving our problem the Verlet Leapfrog Algorithm
as it has offered the best results, features and implementation opportunities. In the
following, we will describe this algorithm.
The Verlet Leapfrog Algorithm can be obtained using a Taylor expansion of the position
in , to order , where is the time step:
(6)

(7)
By adding and subtracting the equations (6) and (7), one obtains:
(8)
(9)
In the following, we define:
(10)
Taking into account the equation (6), the equation (10) can be written:
(11)
The equation (10) implies that:
(12)
Taking into account the equation (7), the equation (12) can be written as:
(13)
By subtracting the equations (11) and (13), one obtains:
(14)
Equation (14) can be written as:
(15)
Afterwards, by adding the equations (11) and (13), one obtains:
(16)
From the equation (16) one obtains:

(17)


Taking into account the equation (10), the equation (17) can be written as:

(18)
or, after calculations,

(19)
In the following, the Verlet Leapfrog Algorithm uses the half time step in order to obtain
accurate velocities. Thus, one defines:

(20)

where is the velocity. Using a Taylor expansion of the right member of the
equation (20), one obtains:
(21)

Similarly, considering:

(22)

and using a Taylor expansion of the right member of the equation (22), one obtains:
(23)

By dividing equation (11) with one obtains:

(24)
Taking into account the definition of and comparing the equations (21) and
(24) one can conclude that:

(25)

By dividing equation (13) with one obtains:

(26)
Considering the definition of and comparing the equations (23) and (26) one
can conclude that:

(27)

In conclusion, the velocities at time t can be computed by adding equations (21) and (23):

(28)
Subtracting the equations (21) and (23), one obtains:


(29)

Using equation (25) one obtains:


(30)

Using the notations the equation (30) can be written as:


(31)

The equations (28) and (29) give the velocities, while the equation (31) gives the positions
of the particles. In conclusion, in the leapfrog method the position is calculated at times
that are integer multiples of the time step, t, t + Δt, t + 2Δt, …, while the velocity is
evaluated at the half-step times t + Δt/2, t + 3Δt/2, …, between these points, starting from an
initial point t₀. The above-mentioned leapfrog algorithm requires less storage and is less
expensive than other approaches when judged from the point of view of the computational
requirements [8]. In the case of large scale computations, these aspects represent important
advantages for the programmer. The Verlet Leapfrog Algorithm also has the advantage that
the conservation of energy is respected even at relatively large time steps. Therefore, when
this algorithm is used, one can considerably decrease the computation time.
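Read in this way, one simulation step of the Verlet Leapfrog scheme can be sketched in a few lines of C++ (our illustration only, reusing the Body structure and the computeAccelerations() routine sketched in the previous code fragment; the accelerations a(t) are assumed to have been computed from the current positions):

// One leapfrog step of size dt: velocities live at half steps (t - dt/2 -> t + dt/2),
// positions at integer steps (t -> t + dt).
void leapfrogStep(std::vector<Body>& b,
                  const std::vector<double>& ax, const std::vector<double>& ay,
                  const std::vector<double>& az, double dt) {
    for (std::size_t i = 0; i < b.size(); ++i) {
        // Kick: v(t + dt/2) = v(t - dt/2) + a(t) * dt
        b[i].vx += ax[i] * dt;
        b[i].vy += ay[i] * dt;
        b[i].vz += az[i] * dt;
        // Drift: x(t + dt) = x(t) + v(t + dt/2) * dt
        b[i].x += b[i].vx * dt;
        b[i].y += b[i].vy * dt;
        b[i].z += b[i].vz * dt;
    }
}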
In the following, we depict our implementation of the N-body simulation on the
Pascal Compute Unified Device Architecture.

3. IMPORTANT ASPECTS REGARDING THE DEVELOPMENT OF THE N-BODY SIMULATION IN THE PASCAL COMPUTE UNIFIED ARCHITECTURE

The most important aspects that we had to take into account when developing the N-body
simulation in the CUDA architecture comprised the proper management of the
synchronization process, of the atomic operations and of the race conditions, so as to avoid
memory leaks and achieve a sufficient amount of dynamic parallelism. When one
develops N-body simulations that target the central processing unit
(CPU) and for which a single processing thread is sufficient, the whole process of
managing race conditions is extensively simplified. In these situations, the developer must
examine carefully the data flow so as to identify whether a specific value has been loaded from a
certain variable before the latest updated value is stored in it.
The vast majority of the compilers that are being used these days for compiling software
applications that make use of a single processing thread have the technical capability to
pinpoint precisely these issues. When developers are programming applications that use
multiple processing execution threads, race conditions have to be methodically and
accurately averted. The threading mechanism implemented in CUDA is configured as to
achieve a high degree of performance without taking into consideration a chronological
order in which the kernels have been invoked and the threads executed.
Like in the case of the N-body simulation, when the state of an element at a certain step is
influenced by the computed result from a previous step, if the developer allocates a
processing thread for each body, for the result to be correct the threads would have to be


processed in an ascending order and the results of the previous execution steps to have
already been calculated and stored in the corresponding variables. When more execution
threads are processed in parallel, the risks become higher for the outcome to have errors,
in some situations the whole application might even crash.
What makes it more complicated is the fact that in some random situations the program
might produce a correct output if a processing thread has the possibility to compute and
store the result before another thread needs to retrieve it. All of these specific aspects
provide a valuable insight on the issue of a race condition, when certain functions of an
application are processing data simultaneously to a particular point in the execution path.
Therefore, we had to develop the N-body CUDA implementation by employing a
synchronization process as the order of execution in the device can vary to a great extent.
By using the synchronization process, we were able to transfer data between the threads
belonging to the same block and between multiple blocks that were part of the same grid
of thread blocks. We have used the local memory area and register memory available to
each of the threads. The shared memory that exists at the level of a block of threads
helped us to interchange data between the threads that resided in the same block.
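To illustrate how block-level shared memory and thread synchronization typically cooperate in such a kernel, the following CUDA C++ sketch stages the bodies tile by tile into shared memory, in the spirit of the well-known pattern from [1]; it is our own minimal illustration, not the authors' code, and BLOCK_SIZE, EPS2 and the folding of the gravitational constant into the masses are assumptions:

#define BLOCK_SIZE 256
#define EPS2 1.0e-6f   // squared softening factor (assumed value)

// pos[i].xyz = position of body i, pos[i].w = its mass (G folded into the masses).
__global__ void bodyAccelerations(const float4* __restrict__ pos, float4* acc, int n) {
    __shared__ float4 tile[BLOCK_SIZE];
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    const float4 pi = (i < n) ? pos[i] : make_float4(0.f, 0.f, 0.f, 0.f);
    float3 ai = make_float3(0.f, 0.f, 0.f);

    for (int base = 0; base < n; base += BLOCK_SIZE) {
        // Each thread of the block stages one body of the current tile into shared memory.
        const int j = base + threadIdx.x;
        tile[threadIdx.x] = (j < n) ? pos[j] : make_float4(0.f, 0.f, 0.f, 0.f);
        __syncthreads();            // the tile must be fully loaded before it is read

        for (int k = 0; k < BLOCK_SIZE && base + k < n; ++k) {
            const float4 pj = tile[k];
            const float rx = pj.x - pi.x, ry = pj.y - pi.y, rz = pj.z - pi.z;
            const float d2 = rx * rx + ry * ry + rz * rz + EPS2;
            const float invD3 = rsqrtf(d2 * d2 * d2);
            ai.x += pj.w * rx * invD3;
            ai.y += pj.w * ry * invD3;
            ai.z += pj.w * rz * invD3;
        }
        __syncthreads();            // all threads must finish before the tile is reused
    }
    if (i < n) acc[i] = make_float4(ai.x, ai.y, ai.z, 0.f);
}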
When developing the N-body simulation, we have used the “cuda-memcheck”
software instrument with the aim of identifying, isolating and solving the issues
concerning memory leaks and the over-usage of memory. In developing the N-body
implementation, we have taken into consideration the fact that the Pascal CUDA
architecture offers support for the dynamic parallelism feature and thus, we were able to
call, from an initial CUDA kernel, supplementary child CUDA kernels and synchronize
the processing. This technique helped us avoid having to invoke more kernel functions or
to keep always several threads idle for being used in the final steps of the computation. By
using the dynamic parallelism technique, we were able to save a huge amount of
computational resources and avoid inefficient results when computing a large number of
bodies.
Of particular usefulness when implementing the simulation using the dynamic parallelism
solution was the fact that we were able to configure and execute grids of blocks
containing more processing threads and, at the same time, to postpone further processing
until all the grids of blocks had finalized their processing, down to the granularity of a
single processing thread residing within a block of the grid. Therefore, we were able to program
a thread from within a grid of blocks so that, in certain situations, it has the possibility of
invoking a new grid of thread blocks (a child grid of thread blocks) that belongs to the
initial parent grid of thread blocks.
we have implemented in our approach consists in the nesting mechanism that is
implemented in the architecture and automatically checks that a parent grid completes the
processing only after the child grids have completed their tasks. Therefore, we made use
of the implicit synchronization mechanism that is enforced by the CUDA runtime on the
parent and child kernel functions.
Through the dynamic parallelism solution that we have implemented, we made sure that
the graphics processing units' resources were efficiently spent and that an appropriate
occupancy of the available resources was achieved, as the child kernel functions that were
called by the parent ones were processing the tasks in parallel with minimum control


divergence or even none whatsoever. When the number of bodies is small, we have
programmed the solution to process using only the parent kernel and not to invoke a
supplementary child kernel as there is not sufficient parallelism in this case to warrant the
invoking of other functions.
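The paper does not reproduce its kernels, but the parent/child structure described above can be sketched as follows (an illustration under our own assumptions: refineLastStates, finalStateKernel and the threshold are hypothetical names and values; device-side launches require compilation with relocatable device code, -rdc=true, which CUDA Toolkit 8.0 supports for the Pascal architecture):

__global__ void finalStateKernel(float4* acc, int n) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        acc[i].w = 0.0f;   // placeholder: here the last states of the bodies would be processed
    }
}

__global__ void refineLastStates(float4* acc, int n, int threshold) {
    // A single thread of the parent grid decides whether extra parallelism is worthwhile.
    if (blockIdx.x == 0 && threadIdx.x == 0 && n > threshold) {
        const int threads = 128;
        const int blocks = (n + threads - 1) / threads;
        // Child grid launched from the device; the parent grid is not considered complete
        // until the child grid has finished (the implicit synchronization mentioned above).
        finalStateKernel<<<blocks, threads>>>(acc, n);
        cudaDeviceSynchronize();   // device-side synchronization, valid in CUDA Toolkit 8.0
    }
    // For a small n, the parent kernel would simply perform the work itself (not shown).
}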
When implementing the N-body simulation in the Pascal CUDA architecture,
after having divided the tasks, we have allocated them to several blocks of processing
threads. We have tested extensively different methods for allocating the sizes of the grids
and of the processing blocks and we have reached peak performance using the following
approach:

NB = N / T   (32)

if N is divisible by T, and

NB = [N / T] + 1   (33)

otherwise, where NB represents the number of allocated thread blocks, N is the number of bodies,
T represents the number of threads per block and [x] is the integral part of the real
number x.
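On the host side, this allocation reduces to the usual rounded-up integer division between the number of bodies and the chosen block size; a short sketch (T = 256 is our assumption, not a value reported in the paper, and bodyAccelerations is the kernel sketched in the previous section):

// Launch configuration equivalent to the allocation rule above: ceil(N / T) thread blocks.
void launchForces(int N, const float4* devPos, float4* devAcc) {
    const int T = 256;                       // threads per block
    const int numBlocks = (N + T - 1) / T;   // number of allocated thread blocks
    bodyAccelerations<<<numBlocks, T>>>(devPos, devAcc, N);
}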
In the following section, we present the experimental results that we have conducted and
an analysis of the obtained performance.

4. EXPERIMENTAL RESULTS AND PERFORMANCE ANALYSIS OF THE N-BODY SIMULATION IMPLEMENTATION ON THE PASCAL ARCHITECTURE

We have developed and run an experimental test suite in order to check the performance
of the developed implementation of the N-body simulation, and we have compared the
results obtained when benchmarking our implementation on the Pascal
architecture with those provided by a state-of-the-art classical sequential implementation of
the N-body simulation on the central processing unit.
In order to analyse the level of performance, we have developed and conducted a
benchmark suite, using as a hardware configuration the central processing unit Intel i7-
5960x operating at 3.0 GHz with 32GB (4x8GB) of 2144 MHz, DDR4 quad channel and
the GeForce GTX 1080 NVIDIA graphics card with 8GB GDDR5X 256-bit from the
Pascal architecture. The software configuration that we have used is Windows 10
Educational operating system and the CUDA Toolkit 8.0 with the NVIDIA developer
driver.
In our experimental tests, we have successively benchmarked different cases regarding the
number N of interacting bodies, ranging from 16 to 8,192, and different numbers of
execution steps, denoted by n. In order to ensure the accuracy of the results, we
have run 100 iterations for each of the benchmark tests and afterwards we have computed
the average of these results. Thus, for each of the analyzed cases, we have computed the
average total execution time (measured in milliseconds) for the CPU implementation


(CPUT), for the GPU implementation (GPUT) and then we have also computed a relevant
metric, the CPUT/GPUT ratio for the corresponding number of execution steps. The
measured total execution times comprise the necessary time for computing, for each step,
the new positions, velocities and accelerations of the N interacting bodies.
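On the GPU side, execution times of this kind are commonly measured with CUDA events; a minimal sketch (our illustration, not necessarily the authors' measurement code):

cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
// ... launch the kernels of the current benchmark iteration here ...
cudaEventRecord(stop);
cudaEventSynchronize(stop);          // wait until the recorded work has finished

float elapsedMs = 0.0f;              // contribution to GPUT, in milliseconds
cudaEventElapsedTime(&elapsedMs, start, stop);

cudaEventDestroy(start);
cudaEventDestroy(stop);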
The N-body problem represents an initial value problem, comprising the system of
differential equations (mentioned in section 2) and initial conditions. As our
implementation is based on parallel computations, in order to obtain, in each of the
analyzed situations, a comparable amount of computations, we have decided to use a
specific generator for the initial conditions (3 components of the position vector, 3
components of the velocity vector, the value of the mass for each body and the softening
positive factor ). Thus, in order to randomly generate the initial conditions, we have
used the newest version of NEMO (A Stellar Dynamics Toolbox, Version 3.3.2, released
in March 14, 2014) [9].
As the results obtained using different settings for the number of execution steps n have
provided similar performance levels, in the following we present and analyze one of these
cases. We have synthesized these results, highlighting, for each of the considered values of N,
the number of interactions N(N−1)/2, the total execution times registered when running the
N-body simulation on the CPU (CPUT) and on the GPU (GPUT) (both measured in
milliseconds) and also the dimensionless CPUT/GPUT ratio (Table 1).

Table 1. Experimental results for the considered number of execution steps

No  Number of bodies  Number of interactions  CPUT (ms)  GPUT (ms)  CPUT/GPUT
1 16 120 0.29 3.731 0.07773
2 32 496 1.128 7.114 0.15857
3 64 2,016 4.472 13.666 0.32725
4 128 8,128 17.826 26.414 0.67488
5 256 32,640 70.907 52.905 1.34028
6 512 130,816 270.602 103.978 2.60249
7 1,024 523,776 874.443 208.918 4.18558
8 2,048 2,096,128 3433.961 422.939 8.11928
9 4,096 8,386,560 14013.55 847.882 16.5277
10 8,192 33,550,336 54338.387 1693.389 32.0885


In order to facilitate the comparison of the obtained experimental results, we have also
represented them in Figure 1 and Figure 2. Thus, in Figure 1 we have represented the
total execution times, while in Figure 2 we have represented the CPUT/GPUT ratio, for
the considered number of execution steps.

Figure 1. The total execution times for the considered number of execution steps

Figure 2. The CPUT/GPUT ratio for the considered number of execution steps

Analysing the experimental results presented in the above table and figures along with the
results obtained for the other values of , we have concluded that when the number of
bodies is less than 256, the best results (the lowest execution time) have been recorded on
the CPU because in this case, the required computational load does not fully employ the
parallel processing power of the GPU.
In all of the analysed situations, when the number of interacting bodies is 256 or higher,
the GPU performance surpasses the CPU's. As Figure 2 highlights, the
CPUT/GPUT ratio is sub-unitary for N ≤ 128 and supra-unitary for N ≥ 256. As the


number N of bodies increases, this ratio also increases, and its greatest value is reached in
the case of N = 8,192, when the average total execution time registered by the GPU is
more than 32 times lower (CPUT/GPUT = 32.0885) than the average total execution time registered by
the CPU. We have registered similar results when choosing and execution
steps. Thus, for the highest value of the dimensionless CPUT/GPUT ratio was
, while for the highest value of this ratio was , both of these
values being registered for .
The experimental results that we have obtained outline that our solutions for
implementing the N-body simulation on the latest Pascal architecture attain a high level of
performance highlighted by the reduced execution times, when compared to the state of
art sequential classical approach. Thus, our implementation has the ability of becoming a
useful, powerful tool in a wide range of scientific domains that employ fast, accurate N-
body simulations.

5. CONCLUSIONS

In our research, we have developed and proposed novel solutions for implementing the N-
body simulation, harnessing the enormous parallel processing power of the latest Pascal
Compute Unified Device Architecture, in order to achieve a high level of performance
and efficiency. An important aspect of our research consists in employing the latest
technical characteristics of the CUDA Toolkit 8.0, leveraging the architecture’s dynamic
parallelism feature for balancing the computational tasks.
The obtained results reflect the efficiency of the developed solutions and their suitability
for implementing the CUDA Pascal N-body simulation, based on the Verlet Leapfrog
Algorithm, in various scientific fields, highlighting the undisputable advantages of our
solution, compared to the classical sequential approaches, when having to process a large
number of interacting bodies. We have conducted extensive experimental tests, choosing
various settings regarding the number N of bodies, the number n of execution steps,
computing in each of the cases the average of 100 iterations, in order to obtain relevant,
accurate, reliable results and a detailed analysis of our implementation. The experimental
suite highlights the reliability, efficiency and applicability of our developed solution
regarding the implementation of the N-body simulation in the Pascal architecture.
In the scientific literature one can find several implementations of the N-body simulation,
but when having to process a large number of bodies, most of them are limited by the
huge computational requirements. Our developed approach has the advantage and brings
the novelty of harnessing the dynamic parallelism feature and the huge computational
potential of the Pascal Compute Unified Device Architecture, offering in a reduced
execution time the accurate states of the N interacting bodies: positions, velocities and
accelerations. The proposed implementation of the N-body simulation, based on the
Verlet Leapfrog Algorithm, proves to be a useful tool in numerous scientific fields,
considering the high computational throughput, the obtained level of performance and
efficiency.


REFERENCES

[1] L. Nyland, M. Harris, J. Prins, "Fast N-Body Simulation with CUDA", in GPU
Gems 3, chapter 31, Addison Wesley, Boston, 2007.
[2] E. Elsen, M. Houston, V. Vishal, E. Darve, P. Hanrahan, V. Pande, "N-Body
simulation on GPUs", in Proceedings of the 2006 ACM/IEEE conference on
Supercomputing (SC '06), ACM, New York, 2006.
[3] J. Jeffers, J. Reinders, A. Sodani, "N-Body simulation", in Intel Xeon Phi Processor
High Performance Programming, Morgan Kaufmann, Boston, 2016.
[4] J.A. Franco R., The N-Body Problem: Classic and Relativistic Solution:
Corrections to: Newton's Gravitational Force for N>2, and Einstein's relativistic
Mass & Energy, under a 3-D Vectorial Relativity approach, CreateSpace
Independent Publishing Platform, North Charleston, 2016.
[5] T.Levi-Civita, The n-Body Problem in General Relativity, D. Reidel Pub. Co.,
Dordrecht, 1964.
[6] T. Burgess, The n-Body Problem, ChiZine Publications, Toronto 2013.
[7] K. Meyer, G. Hall, D. Offin, Introduction to Hamiltonian Dynamical Systems and
the N-Body Problem (Applied Mathematical Sciences), Springer, New York, 2009.
[8] S. J. Aarseth, Gravitational N-Body Simulations: Tools and Algorithms (Cambridge
Monographs on Mathematical Physics), Cambridge University Press, 2009.
[9] P.J. Teuben, "The Stellar Dynamics Toolbox NEMO", in: Astronomical Data
Analysis Software and Systems IV, PASP Conf. Series, vol. 77, 1995.


THE ELIMINATION METHOD OF FOURIER-MOTZKIN IN LINEAR PROGRAMMING

Manolis Kritikos 1*
Dimitrios Kallivokas 2

ABSTRACT

The paper applies the elimination method of Fourier-Motzkin to a production problem in
Linear Programming. Following the Fourier-Motzkin elimination method, we
successively eliminate variables of the model that solve a linear programming problem
until its final solution. The method helps us to find the optimum solution of a linear
programming problem without using Linear Programming methodology.

KEYWORDS: elimination method of Fourier-Motzkin, decision making, optimization, linear programming, system of inequalities.

INTRODUCTION

The following problem is a simple production problem (G. Prastacos, 2008) of a business
that produces two products, named A and B. In order to be produced, those two products are
processed by two departments of the enterprise, named T1 and T2 respectively. The
available operating hours of the two processing departments per month are limited to
2100 and 1800 hours respectively. Furthermore, the production time of each product type
is different in each department and is given in Table 1 below.
Table 1: Employment of the departments per unit of output

PRODUCT DEPT Τ1 DEPT Τ2

Α 2 3

Β 3 2

In our problem, there are additional restrictions concerning customer satisfaction
requirements and the limited capacity of the storage areas, which result in the production of
at most 400 units of product A and of at least 300 units of product B per
month (M. Kritikos and G. Ioannou, 2013). The enterprise has to determine the number of
products A and B that should be produced in order to maximize the total profit from the
disposal of these 2 products. The operational profit from the sale of products A and B is

1*
corresponding author, Management Sciences Laboratory, Athens University of Economics and Business,
47A Evelpidon and 33 Lefkados St. 8th Floor, Athens 113-62, Greece, kmn@aueb.gr
2
Technological Educational Institution of Athens, dimkalliv@yahoo.gr


respectively 5 and 3 credit units. In order to delineate the set of feasible solutions of the
production problem, we will use the following inequalities arising from the
problem data. Let A be the number of products A that we produce and B the number of
products B, respectively. Because every product of type A must be processed for
two hours in department T1, every product of type B must be processed for three
hours in department T1, and the total availability is 2100 hours, we will apply the
following inequality in the problem: (2 hours per product A) x (number of products A) +
(3 hours per product B) x (number of products B) ≤ total hours available, namely:
2A +3 B ≤ 2100 (1)
Similarly, the inequality in relation to the operation of department T2 would be:
3A +2B ≤ 1800 (2)
Additionally, because the production should include at most 400 units of product A and at
least 300 units of product B, the inequalities expressing these restrictions are A
≤ 400 (3) and B ≥ 300 (4). Of course, it is obvious that the restrictions of nonnegative
output have to be applied, that is A ≥ 0 (5) and B ≥ 0 (6).

FOURIER-MOTZKIN ELIMINATION METHOD

We apply the Fourier-Motzkin elimination method (Dantzig, 1963) in order to determine


the optimal solution of the problem. We suppose that we earn 5 and 3 credit units from
the sale of products A and B, respectively. In this case, our profit is given by the function
z = 5A + 3B (7). Afterwards, we search for a solution which, on the one hand, satisfies
the inequalities (1), (2), (3), (4), (5) and (6) and, on the other hand, gives z the
largest value that does not exceed 5A + 3B; that is, the solution of the problem is obtained
from the solution of the following system of inequalities:
2 A + 3B ≤ 2100 (1)
3 A + 2B ≤ 1800 (2)
A ≤ 400 (3)
B ≥ 300 (4)
A≥0 (5)
B≥0 (6)
z ≤ 5 A + 3B (7)
Following the Fourier-Motzkin elimination method (Dantzig, 1963) for the solution of the
above system, we initially eliminate variable A. For this reason, the system could be
written as:


A ≤ 1050 − (3/2)B (1)
A ≤ 600 − (2/3)B (2)
A ≤ 400 (3)
B ≥ 300 (4)
A ≥ 0 (5)
B ≥ 0 (6)
A ≥ z/5 − (3/5)B (7)
It should be observed that the above inequalities relating to variable A can be grouped
into those in which A is larger than a linear expression, those in which A is smaller than a
linear expression, and those that do not contain the variable A. Of course, it is evident that the
values of the first group are smaller than the values of the second group. Thus, we get the following system:
z/5 − (3/5)B ≤ 1050 − (3/2)B (1)
z/5 − (3/5)B ≤ 600 − (2/3)B (2)
z/5 − (3/5)B ≤ 400 (3)
1050 − (3/2)B ≥ 0 (4)
600 − (2/3)B ≥ 0 (5)
400 ≥ 0 (6)
B ≥ 0 (7)
B ≥ 300 (8)
Repeating the process in order to eliminate the variable B gives us the following system
of inequalities:


B ≤ (10/9)(1050 − z/5) (1)
B ≤ 15(600 − z/5) (2)
B ≥ (5/3)(z/5 − 400) (3)
B ≤ (2/3)·1050 (4)
B ≤ (3/2)·600 (5)
B ≥ 0 (6)
B ≥ 300 (7)
Grouping the inequalities as in the case of variable A results in inequalities involving only
the variable z, whose solution gives the value of z. For example, the combination of (1) and (3) gives
us the inequality:
(5/3)(z/5 − 400) ≤ (10/9)(1050 − z/5) ⇒ z ≤ 3300.
Similarly, the other inequalities of the method occur:
(5/3)(z/5 − 400) ≤ 15(600 − z/5) ⇒ z ≤ 2900
(5/3)(z/5 − 400) ≤ (2/3)·1050 ⇒ z ≤ 4100
(5/3)(z/5 − 400) ≤ (3/2)·600 ⇒ z ≤ 4700
(10/9)(1050 − z/5) ≥ 0 ⇒ z ≤ 5250
(10/9)(1050 − z/5) ≥ 300 ⇒ z ≤ 3900


It is evident from the above solutions that the maximum value of z is z = 2900. We put z = 2900 in
the inequalities system before the elimination of B, so that B = 300. Then we set the values
z = 2900 and B = 300 in the original inequalities system, so that A = 400. Namely, we have
z = 2900, A = 400 and B = 300. We confirm the solution using the PHPSimplex tool
(http://www.phpsimplex.com/simplex/simplex.htm?l=en).
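The same elimination steps can be carried out mechanically; the following self-contained C++ sketch (our illustration, with floating-point coefficients and hypothetical names) encodes constraints (1)-(7), eliminates A and then B, and recovers the optimal value z = 2900:

#include <algorithm>
#include <iostream>
#include <vector>

// One inequality: coef[0]*A + coef[1]*B + coef[2]*z <= rhs
struct Ineq { std::vector<double> coef; double rhs; };

// Fourier-Motzkin elimination of variable v: every lower bound on x_v is combined
// with every upper bound on x_v, so that x_v disappears from the resulting system.
std::vector<Ineq> eliminate(const std::vector<Ineq>& sys, int v) {
    std::vector<Ineq> lower, upper, result;
    for (const Ineq& q : sys) {
        if (q.coef[v] > 1e-12) upper.push_back(q);        // gives x_v <= (...)
        else if (q.coef[v] < -1e-12) lower.push_back(q);  // gives x_v >= (...)
        else result.push_back(q);                         // x_v does not appear
    }
    for (const Ineq& lo : lower)
        for (const Ineq& up : upper) {
            // Scale the two inequalities so the coefficients of x_v cancel, then add them.
            Ineq c; c.coef.assign(lo.coef.size(), 0.0);
            const double a = up.coef[v], b = -lo.coef[v];
            for (std::size_t k = 0; k < c.coef.size(); ++k)
                c.coef[k] = b * up.coef[k] + a * lo.coef[k];
            c.rhs = b * up.rhs + a * lo.rhs;
            result.push_back(c);
        }
    return result;
}

int main() {
    // Variables: x0 = A, x1 = B, x2 = z. Constraints (1)-(7) of the production problem.
    std::vector<Ineq> sys = {
        {{ 2,  3,  0}, 2100},   // 2A + 3B <= 2100
        {{ 3,  2,  0}, 1800},   // 3A + 2B <= 1800
        {{ 1,  0,  0},  400},   // A <= 400
        {{ 0, -1,  0}, -300},   // B >= 300
        {{-1,  0,  0},    0},   // A >= 0
        {{ 0, -1,  0},    0},   // B >= 0
        {{-5, -3,  1},    0}    // z <= 5A + 3B
    };
    std::vector<Ineq> s = eliminate(eliminate(sys, 0), 1);  // eliminate A, then B

    double zMax = 1e300;                     // smallest upper bound on z
    for (const Ineq& q : s)
        if (q.coef[2] > 1e-12) zMax = std::min(zMax, q.rhs / q.coef[2]);
    std::cout << "Maximum profit z = " << zMax << std::endl;  // prints 2900
    return 0;
}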

CONCLUSION

The paper applies the elimination method of Fourier-Motzkin in Linear Programming.
Working through a system of inequalities showed the usefulness of mathematics in a
simple operational application. The method helps us to find the optimum
solution of a production problem without using the linear programming methodology.

REFERENCES

[1] G. B. Dantzig (1963), Linear Programming and Extensions, Princeton University Press
[2] M. N. Kritikos and G. Ioannou, (2013) The heterogeneous fleet vehicle routing
problem with overloads and time windows, International Journal of Production
Economics, 144, 68-75
[3] G. P. Prastacos, (2008), Managerial Decision Making Theory and Practice, Tsinghua
University Press
[4] http://www.phpsimplex.com/simplex/simplex.htm?l=en


THE STATE AND DYNAMICS OF ECOECONOMY IN ROMANIA. REMARKS AND PERSPECTIVES

Oana Mihaela Văcaru 1*


Cristina Teodora Bălăceanu 2
Mihaela Gruiescu 3

ABSTRACT

The economy has become more than just a science studied in any education system as an
organic part of the social sciences; it is a state of affairs, a state of mind and a way
of life of the contemporary man. In everyday situations we are concerned with
the price of the products that have utility for us, with whether the nominal net income allows us
to purchase them or not, with the way the evolution of certain indicators affects the level of
salaries, with how a change in the coordinates of the monetary and exchange rate policy will
affect the workplace, or with the impact that certain statements of dignitaries have on the oil market. All
these are determinations of the economic behaviour of the economic subjects, namely
consumers and producers.

KEY WORDS: ecoeconomy, development, economic indicators.

We cannot exclude the existence of our economic dimension, just as we cannot exist if we do not
consume, and we cannot consume without producing. This dictum is not shared by
all the stakeholders in the world economy: there are, it seems, the 'cicada' economies,
focused on consumption, in which consumption exceeds output, and the 'ant' economies, with a
pronounced saving character, whose production capacity and exports are higher than their consumption
and imports, respectively.
Why this dichotomy? At first glance, one might say that the endowment with factors of
production differentiates economies, in terms of the size and structure of the
aggregate supply. Also, the size and dynamics of the needs are likely to favour
or disadvantage an economy in its relations with other economies. Moreover, as is often
stated, the socio-political, geopolitical and geostrategic position is among the factors
determining the ratio of forces between economies. Without exception, all these factors
must be discussed in terms of mutual interdependence, complementarity, and complicity.
As a result of the development of cutting-edge technologies and of scientific research, and of the
unprecedented influence of the media on consumer decision-making behaviour, the ratio

1*
corresponding author, PhD Candidate, the Bucharest Academy of Economic Studies, Romania,
oanna.vacaru@gmail.com
2
Professor of Economic, “Dimitrie Cantemir" Christian University in Bucharest, movitea@yahoo.com;
3
Associate Professor of Statistics and Econometrics, Romanian American University / Academy of Economic
Studies in Bucharest, mihaela.gruiescu@csie.ase.ro.


between resources and needs, between availabilities and needs, suffers a strong distortion,
which makes the pressure on the economy to produce goods that meet these needs
particularly strong. This pressure has direct repercussions on how economic
resources, both owned and acquired, are combined and, respectively, on the convergent implications of the
productive sector in supplying to the level of demand. The problem lies, on the one
hand, in the efficient management of own and syndicated resources in the context of the
continuous reduction of conventional energy resources and of the adjustment of demand
in relation to its purchasing power and, on the other hand, in the management of the
trade balance and balance of payments imbalances at country level, as a result
of the discrepancy between exports and imports.
The advantage of globalization lies in the mobility of the factors of production in order to
cover the demand of economic goods in those economies where endowment with factors
of production is insufficient, in which case the import is preferable. Modern economies
are global economies the relative position of which is determined by the market
competitiveness and efficiency ratios. We cannot exclude imports, but they are justified when
the relative cost of producing a unit on the domestic market is higher than the relative cost
of the product on the external market, or when the demand for factors of production cannot be
covered by the internal market. What is inefficient and unbalanced is the increase in
imports within those countries which own factors of production but whose
production capacity covers the demand level only partly, both quantitatively and
qualitatively. What would be the reasons?
Firstly, it is the lack of orientation of economic operators in relation to the size and
structure of the demand and to the ability to absorb it. The stimulation of production
should be correlated with the rate of increase of consumers’ real incomes, but also
with the presumptive increase of the lending limits of the banking
and non-banking systems. The financial-banking system can artificially maintain the increase in
demand by adjusting incomes, which would boost the domestic production capacity or
the import of goods. Secondly, it is the low dynamics of labour productivity, either as a
result of the lack of adequate production facilities, the correlation of the investment plan
with the structure of the offer or as a result of the combination of inefficient production
factors, which have the effect of an uncompetitive production externally, in other words
the failure of meeting the internal and external demand.
Also, there are certain restrictions on intensifying the domestic production capacity, such
as political factors (conflicts of interest with regard to facilitating the import of certain
goods, or excessive bureaucracy), social factors (the existence of a
differentiated social structure that requires supporting disadvantaged social classes, which
in turn requires increasing public spending, i.e. adjusting fiscal policy), and the degree of
involvement of civil society in changing people's mentality as regards the act of
production and consumption, by addressing the productive sphere as an opportunity and not only as a
priority of the act of consumption.
In this context, the economy, thanks to its actors, looks for solutions to optimize the ratio
between needs and possibilities in line with the natural environment, taking into account
the quantitative restrictions of a monetary and financial nature, such as the boomerang
effect caused by excesses of any kind.


At the same time, in the context of conventional resource shortages and of the irrational use of
resources, which together have generated waste and pauperism, it is necessary to create
mechanisms through which the economy identifies itself with the laws of nature that govern it, uses
natural resources in a sustainable manner and devises strategies through which the utilisation of
free goods gives balance and limits to the production process, mainly owing to the natural
degree of absorption of sustainable products in the environment. This natural way of dealing
with the economy is identified with eco-economy.
This analysis starts with the indicators identified in the previous chapter, grouped in
accordance with 10 main themes of ecoeconomy: socio-economic development,
sustainable production and consumption, social inclusion, demographic shifts, public
health, energy and climate change, sustainable transport, natural resources, global
partnership, and good governance.
The socio-economic development theme considers both quantitative aspects of
economic development (investments, the savings rate of households) and
qualitative ones, in the form of research and development expenditures, eco-efficiency indexes and the
energy intensity of the economy, relevant for the innovation, competitiveness and eco-
efficiency dimension, as well as the aspects that characterize the level of employment at the
country level, including measures aimed at integrating young people who neither have a job
nor are present in any form of education (Young people neither in employment nor in
education or training, NEET), or the nominal unit labour cost.
In our analysis we have used GDP per capita as an indicator to characterize the
level of development, this being an indicator that is also included in the HDI (human development
index).
Table 1. Real GDP/inhabitant, Euro

Real GDP/ inhabitant, Euro


Country\year 2007 2008 2009 2010 2011 2012 2013 2014 2015
UE-28 26200 26200 25000 25400 25800 25600 25600 25900 26300
Belgium 34000 34000 32900 33500 33900 33700 33500 33800 34100
Bulgaria 4900 5300 5100 5100 5200 5300 5400 5500 5700
Czech Republic 15200 15400 14600 14900 15200 15000 15000 15200 :
Denmark 46200 45600 43000 43500 43900 43700 43400 43700 43900
Germany 32100 32500 30800 32100 33300 33400 33400 33800 34100
Estonia 13300 12600 10800 11000 11900 12600 12800 13200 13400
Ireland 40700 39000 36500 36400 37200 37200 37600 39500 42300
Greece 22700 22600 21500 20300 18500 17200 16800 17000 17000
Spain 24500 24400 23300 23200 22900 22300 22000 22400 23100
France 31500 31400 30300 30800 31200 31200 31200 31100 :
Croatia 11200 11500 10600 10500 10500 10300 10200 10200 10400
Italy 28700 28200 26500 26800 26900 26000 25400 25300 25500


Cyprus 24200 24500 23300 23000 22600 21700 20400 20100 20600
Latvia 10200 9900 8600 8500 9200 9700 10000 10400 10800
Lithuania 9800 10100 8700 9000 9800 10300 10800 11200 11500
Luxembourg 82900 80800 75100 77900 78100 75600 76900 78200 80500
Hungary 10300 10400 9700 9800 10000 9900 10100 10500 10900
Malta 15500 16000 15500 15900 16200 16500 17000 17500 18400
The Netherlands 38900 39400 37700 38000 38500 37900 37600 37900 38500
Austria 35700 36100 34700 35200 36100 36200 36100 36000 36000
Poland 8600 8900 9100 9400 9900 10000 10100 10500 10900
Portugal 17200 17200 16700 17000 16700 16100 16000 16300 16600
Romania 6100 6700 6300 6300 6400 6400 6700 6900 7200
Slovenia 18600 19200 17500 17700 17800 17300 17100 17600 18000
Slovakia 11900 12600 11800 12400 12800 13000 13200 13500 14000
Finland 37200 37300 34000 34900 35600 34900 34500 34100 34200
Sweden 40400 39800 37400 39400 40100 39700 39800 40300 41600
Great Britain 30500 30100 28700 28900 29200 29400 29800 30400 30900
Source: processed after www.eurostat.org

Graph 1. Evolution of GDP per inhabitant in Romania compared to EU-28, period 2007-2015 (Source: author’s contribution after processing Eurostat data)


The growth of GDP per capita in the analyzed period was due to the contribution of
investments, as a result of the transition of Romania's economy from a transition
economy to an emerging economy. At the same time, budget allocations to areas such as
education and health, sustaining infrastructure development policies, have led to increased
business expectations and entrepreneurship, with a direct effect on living
standards. The need for training, through the adaptability of the educational system to the
requirements of the labor market, will decisively contribute to increasing the level of education and,
implicitly, the quality of life.
Table 2. Human development index, 2015
Country  HDI (value, 2014)  Life expectancy at birth (years, 2014)  Estimated school years (years, 2014)  Mean promoted school years (years, 2014)  Gross domestic income per capita (Euro, 2014)
Denmark 0.923 80.2 18.7 12.7 44,025
The Netherlands 0.922 81.6 17.9 11.9 45,435
Germany 0.916 80.9 16.5 13.1 43,919
Ireland 0.919 80.9 18.6 12.2 39,568
Sweden 0.907 82.2 15.8 12.1 45,636
Great Britain 0.907 80.7 16.2 13.1 39,267
Luxembourg 0.892 81.8 13.9 11.7 58,711
Belgium 0.89 80.8 16.3 11.3 41,187
France 0.888 82.2 16 11.1 38,056
Austria 0.885 81.4 15.7 10.8 43,869
Finland 0.883 80.8 17.1 10.3 38,695
Slovenia 0.88 80.4 16.8 11.9 27,852
Spain 0.883 82.6 17.3 9.6 32,045
Italy 0.873 83.1 16 10.1 33,030
Czech Republic 0.87 78.6 16.4 12.3 26,660
Greece 0.865 80.9 17.6 10.3 24,524
Estonia 0.861 76.8 16.5 12.5 25,214
Cyprus 0.85 80.2 14 11.6 28,633
Slovakia 0.844 76.3 15.1 12.2 25,845
Poland 0.843 77.4 15.5 11.8 23,177
Lithuania 0.839 73.3 16.4 12.4 24,500
Malta 0.839 80.6 14.4 10.3 27,903
Portugal 0.83 80.9 16.3 8.2 25,757


Hungary 0.828 75.2 15.4 11.6 22,916


Latvia 0.819 74.2 15.2 11.5 22,281
Croatia 0.818 76.2 14.8 11 19,409
Romania 0.793 74.7 14.2 10.8 18,108
Bulgaria 0.782 74.2 14.4 10.6 15,596
Source: data processed from www.eurostat.org

The importance of sustainability lies in the marginal benefit in human development,
whereby incomes, through the redistribution process, contribute to facilitating human
development. By human development is actually understood the extent to which
the individual reaches a certain standard of living by identifying his subjective,
objective and factual needs, generated by the economic, social, political and
cultural awareness of the ways of satisfying them, by reporting to existing and potential
resources. Individuals, engaged in a workable economic system, develop their capacities
and powers to secure the income required to satisfy societal needs. It should be noted
that the scope of the needs exceeds the sphere of material needs, the tendency being to
cover the needs of security, justice, governance (participation in community life,
involvement in decision-making processes), education, culture, arts, and multiculturalism.

Graph 2. Graphic representation of HDI, 2015 (Source: author’s contribution after processing Eurostat data)

Ecoeconomy transforms the benefits of ecology and bioeconomics into economic policies
which give sense and rationality to economic activity, both at the level of consumption,
as the defining act that supports a market economy, and at the level of
allocation, as a way to reduce societal inequalities.


The problems mankind is facing, from those related to the irrational use of natural
resources, reaching their limits and generating increasing greenhouse gases, global
warming, the intensification of natural disasters, to those generating economic crisis,
prolonged recession, unemployment, structural deficits with repercussions on the quality
of life, make necessary a rethinking of the economic system on rational, ethical and
ecological bases. Naturally, ecoeconomy becomes an integrative concept which can
manage unitarily environmental, social, economic, or ethical issues.
The allocation issue is obvious, especially given the fact that, without effective allocation,
production might cost more than its marginal benefit, which would mean a waste of
resources, energy etc. We believe that the allocation can be integrated into development/growth
paradigms in a way completely different from the traditional approach. The issue
of allocation lies in the size of the scale and in the intensity of the growth, which has perverse
effects on a finite ecosystem, such as the Earth's ecosystem, which cannot support a continuous
growth of the economies through the introduction of ever newer needs.
Basically, ecoeconomy is a complex, integrating, wealth-generating process that sustains
not only the satisfaction of people's vital needs but also incorporates, in the measures for
improving the standard of living and the quality of life, those non-commensurable aspects
pertaining to individual freedoms, safety, honesty, morality, equality of opportunity,
respect, and honour. Hence, thanks to ecoeconomy, particular attention is paid to
emphasizing the human dimension of development policies, as well as to the qualitative
approach of economic growth policies focused on ensuring the sustainability of development
and on strengthening the causal links between economic growth, human development
and the natural environment.
The concept of development signifies a fundamental feature of life: living beings develop
throughout their life, which means that they evolve. Development is a subsystem of the
system of life (Capra Fritjof, 2005); evolutions within this system refer to new forms
of spontaneous order, which confer dynamism, evolution, and creativity.
Development is also a living system, with its own internal structure, being in a permanent
form of evolution. For this reason, a series of manifestations of the concept of development,
with local specificities and different intensities, can be identified.

CONCLUSION

The presentation of the two indicators reveals that Romania has to advance in terms of
development, both through investments to support sustainable development and through
education and training. Ecoeconomy constitutes a chance for Romania to pursue
the ambitions of the Europe 2020 strategy, by creating a sustainable and inclusive
economy, competitive with the EU economies, in which natural resources are
used eco-efficiently and sustainable jobs are created through the rational use of land, the
use of cutting-edge technologies in the creation of products, their promotion and the opening of
new markets. The indicator that highlights the benefits of a sustainable economy, focused on
valuing natural resources according to sustainable principles, is the index of eco-
innovation.
Romania is found to have recorded lower values of this indicator compared to the
developed countries of the EU, which shows the progress still needed concerning the incorporation of


innovation and research into the use of resources, labour and capital from the perspective
of the production in order to cover needs in accordance with the sustainable principles.

REFERENCES

[1] Ayres, Robert and Jeroen vand den Bergh, and John Gowdy (2000), “Viewpoint:
Weak versus Strong Sustainability”,
http://www.tinbergen.nl/discussionpapers/98103.pdf;
[2] Capra Fritjof, (2005), Development and Sustainability, www.ecoliteracy.org
[3] Chichilnisky, G., (1998), “Sustainable development and North-South trade”,
Published in: Protection of Global Biodiversity (0198): pp. 101-117,
http://mpra.ub.uni-muenchen.de/8894/;
[4] Chichilnisky, Graciela (1995), “The economic value of the Earth’s resources”,
MPRA Paper No.8491;
[5] Commoner, Barry (1980), Cercul care se închide, Politică Printing House;
[6] Daly, Herman E. (1997), “Georgescu Roegen versus Solow/Stiglitz”, Ecological
Economics 22;
[7] Danciu A.R., Niculescu Aron I.G., Gruiescu M., (2007) Statistics and econometrics,
„Enciclopedica” Publisher, Bucharest;
[8] Dietz, Simon and Eric Neumayer (2006), “Weak and Strong Sustainbility in the
SEEA: Concepts and Measurement”, Ecological Economics 61 (4),
http://eprints.lse.ac.uk/3058/1/Weak_and_strong_sustainability_in_the_SEEA_(LS
ERO).pdf;
[9] Goodland, R. (1996). The Concept of Environmental Sustainability. Annual Review
of Ecology and Systematics, Vol. 26;
[10] Gowdy, J. and Mesner, S. (1998). The Evolution of Georgescu-Roegen’s
Bioeconomics. Review of Social Economy, Vol.LVI, No.2,
http://are.berkeley.edu/courses/ARE298/Readings/goodland.pdf
http://homepages.rpi.edu/~gowdyj/mypapers/RSE1998.pdf;
[11] Pearce, David and Giles Atkinson (1998), “The concept of sustainable
development: An evaluation of its usfulness ten years after Brundtland”, Swiss
Journal of Economics and Statistics, Vol.134 (3);
[12] http://ec.europa.eu/eurostat/tgm/web/table/description.jsp, code tsdnr100.


UNIVERSE SIMULATORS

Stefan Prajica 1*
Costin-Anton Boiangiu 2

ABSTRACT

Whether they seek to verify theories of the origin and evolution of the large-scale
structure of the Universe, to understand more about its past or to make predictions for its
future, scientists rely on supercomputers to model and create cosmological simulations.
In the areas of physics and astrophysics, today's computational resources make it
feasible to simulate complex physical systems with a useful degree of accuracy. The
article presents two different approaches for simulating universes – hydrodynamic
cosmological simulations and N-body cosmological simulations – and their corresponding
state-of-the-art implementations.

KEYWORDS: astrophysics, cosmology, smoothed-particle hydrodynamics, N-body simulation

1. INTRODUCTION

Computational astrophysics involves the use of computers and numerical methods for
solving problems identified by astrophysics research, when the mathematical models
which describe astrophysical systems are impossible to calculate analytically [1][2].
Even if the results generated by such methods do not represent the exact solution, these
approximations are far more valuable than precise solutions of approximate equations that
can be determined analytically.
Notable results by computation in astrophysics were obtained in the following areas of activity:
• Stellar structure and evolution
• Radiation transfer and stellar atmospheres
• Astrophysical fluid dynamics
• Planetary, stellar, and galactic dynamics
Since about 95% of the Universe consists of “darkness” – 72% dark energy and 23% dark
matter – solving the mystery of the dark energy’s nature can only be achieved through
indirect observation [3]. Future space surveys will capture the light of billions of galaxies,
and astronomers will evaluate the subtle distortions caused by the light of these
background galaxies being deflected by a foreground but invisible distribution of mass – the dark matter.

1*
corresponding author, Engineer, ”Politehnica” University of Bucharest, 060042 Bucharest, Romania,
Stefan.Prajica@tangoe.com
2
Professor PhD Eng., ”Politehnica” University of Bucharest, 060042 Bucharest, Romania,
costin.boiangiu@cs.pub.ro


The primary tools for researching cosmic structure formation are simulations [4], which
astrophysicists use to study how matter clusters in the Universe by gravitational
aggregations. Due to advancements, simulations now include the visible, baryonic matter
as well as non-baryonic cold dark matter.
The process of a cosmological simulation is comprised of two parts [5]: the first part
requires the generation of initial conditions as stated by the structure formation model to
be investigated – these conditions will be used as input. In the following step, algorithms
simulate the evolution of particles by tracking their trajectories under their mutual gravity.
Put simply, cosmological simulations are just tools used to investigate how millions of particles evolve, with the particles sampling the matter density field as precisely as possible. In the end, cosmologists study the output of these simulations to check whether it matches the data gathered by space surveys, and to judge and understand any discrepancies found. Figure 1 sketches the evolution of the Universe and the gap, between observations gathered from the Cosmic Microwave Background (which describe the early Universe) and current observations, which these simulations are trying to cover.

Figure 1. Cosmic Timeline [5]


During the past decade, very accurate Cosmic Microwave Background (CMB)
experiments such as WMAP [6] and Planck [7] brought us to the era of high precision
cosmology.

WMAP

NASA’s Explorer mission launched in June 2001, the Wilkinson Microwave Anisotropy Probe, made fundamental measurements of our Universe, and after WMAP’s data stream ended, the analysis of the data it collected revealed that the mission was, in fact, unexpectedly successful. A few of the achievements stated by NASA on WMAP’s website include [6]:
• it determined that baryons make up only 4.6% of the universe
• it narrowed down the curvature of space to within 0.4% of “flat” Euclidean geometry
• it detected that the amplitude of variations in the density of the Universe is slightly larger on big scales compared to smaller scales – among other results, this supports the idea of “inflation”, namely that tiny fluctuations were generated during the expansion and eventually grew to form galaxies
• it confirmed predictions of the inflation idea, namely that the distribution of such fluctuations follows a bell curve with the same properties across the sky, and that the number of hot spots and cold spots on the map is equal

2. RESEARCH STANDARD AND APPROACHES

The standard Lambda Cold Dark Matter (LCDM) model [8] managed to constrain cosmological parameters such as the Hubble constant or the total amount of matter contained in the Universe down to a few percent. Over the following decade, future cosmological experiments might potentially change modern physics by clarifying two of the most elusive components of the Universe – dark energy and dark matter. This is particularly necessary because two of the methods considered for quantifying the clustering of matter as a function of time and scale – galaxy clustering and weak lensing – rely on predictions of the non-linear dynamics of dark matter that are as accurate as possible.
As approaches to simulate the universe, two of the most common and successful methods
are:
1) Hydrodynamic cosmological simulations – smoothed-particle hydrodynamics (SPH), which works by dividing the fluid into particles, which are in fact a set of discrete elements [9]. A kernel function “smoothes” their properties over a spatial distance, which means that summing the relevant properties of all of the “grouped” particles yields the physical quantity at any point (a minimal sketch of this idea follows this list).
2) N-body simulations – the simulation of a dynamical system of particles,
influenced by physical forces (e.g. gravity) [10]. For processes of non-linear
structure formation, like galaxy halos and galaxy filaments influenced by dark
matter, N-body simulations can be used for research.
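To make the kernel-summation idea in approach 1) concrete, the following is a minimal, illustrative Python sketch (not the code of any of the simulations discussed here): the density at each particle is estimated by summing the masses of neighbouring particles weighted by a cubic spline smoothing kernel. The kernel normalisation and the fixed smoothing length h are simplifying assumptions.

    import numpy as np

    def cubic_spline_kernel(r, h):
        # Standard 3D cubic spline smoothing kernel, normalised by 1/(pi*h^3).
        q = r / h
        sigma = 1.0 / (np.pi * h ** 3)
        w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
             np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
        return sigma * w

    def sph_density(positions, masses, h):
        # Density at each particle = kernel-weighted sum of the masses of all particles.
        rho = np.zeros(len(positions))
        for i, p in enumerate(positions):
            r = np.linalg.norm(positions - p, axis=1)   # distances to every particle
            rho[i] = np.sum(masses * cubic_spline_kernel(r, h))
        return rho

    # toy usage: 1,000 equal-mass particles in a unit box
    pos = np.random.rand(1000, 3)
    rho = sph_density(pos, np.full(1000, 1.0 / 1000), h=0.05)

Real SPH codes restrict the sum to neighbours within the kernel support and adapt h per particle; the brute-force loop above is only meant to show the smoothing step.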


2.1. Bolshoi

Bolshoi [11] and the follow-up work, BigBolshoi [12] (a simulation 64 times larger in volume than the initial one), are two of the most accurate cosmological simulations ever made. Run during 2010-2011 on NASA’s Pleiades supercomputer, the simulation generated the distribution of over 8.6 billion particles of dark matter across a three-dimensional cube-shaped space of around 1 billion light years on each side. Since the mass of each simulated particle (the mass “quantum” of the simulation) is so large, it is pointless to attempt to distinguish between dark and baryonic matter. The simulated particles are of dark matter, and the first evolved structures are the dark matter halos which contain the galaxies.
The simulation started about 23 million years after the Big Bang and the initial conditions were generated using the CAMB tools provided on the Wilkinson Microwave Anisotropy Probe website. Because the outputted volume represents an arbitrary portion of the Universe, comparing its content against observations must be done from a statistical point of view.
Initially, all of the particles were close to being uniformly distributed across the cube.
This was the overall setting of the universe after inflation and the first emission of the
cosmic background radiation.

Figure 2. The distribution of particles across the cube, during different evolution stages

The algorithm behind the Bolshoi simulation was an adaptation of the one created by Andrey V. Kravtsov from the University of Chicago. The first step is to divide the cubical simulation volume into a grid of smaller cube-shaped cells. Iterative splitting continues until the number of particles contained by a cell drops under a pre-determined threshold. The smallest cell is roughly 4,000 light years on each side, and the mesh is made up of about 16.8 million cells. The simulation used 13,824 processor cores and 13 TB of memory of NASA’s supercomputer based at the Ames Research Center, Pleiades, which was ranked 7th worldwide at the time (end of 2011).
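A minimal, illustrative sketch of the cell-refinement rule described above (recursively splitting a cubic cell into eight children until it holds no more than a chosen number of particles) is shown below; the threshold value and the data are made up, and the actual adaptive-refinement code behind Bolshoi is far more elaborate.

    import numpy as np

    def refine(cell_min, cell_size, particles, threshold, leaves):
        # Stop splitting once the cell holds few enough particles; record it as a leaf cell.
        if len(particles) <= threshold:
            leaves.append((cell_min, cell_size, len(particles)))
            return
        half = cell_size / 2.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    child_min = cell_min + half * np.array([dx, dy, dz])
                    inside = np.all((particles >= child_min) &
                                    (particles < child_min + half), axis=1)
                    refine(child_min, half, particles[inside], threshold, leaves)

    # toy usage: 100,000 random particles in a unit box, at most 64 particles per leaf cell
    pts = np.random.rand(100_000, 3)
    leaves = []
    refine(np.zeros(3), 1.0, pts, threshold=64, leaves=leaves)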
The Bolshoi cosmological simulation, as opposed to the Millennium Run, which used cosmological parameters from WMAP1 that became obsolete, used as parameters data from WMAP5, consistent with WMAP7. Ever since the introduction of Cold Dark Matter (1984) and the first CDM N-body cosmological simulations, which were essential for determining the characteristics of dark matter, such simulations have been at the core of calculating predictions on the scales where structure has formed.
These large cosmological simulations are now the basis for actual research on the structure and evolution of the Universe, and the clusters of galaxies within it. Further study of the generated data could point out the presence and location of satellite galaxies.


2.2. PKDGRAV3

Astrophysics researchers from the University of Zurich developed, over three years of work, code designed to maximize the use of modern supercomputing architectures such as the Swiss National Computing Center’s “Piz Daint”. In the paper [13] recently published by Joachim Stadel, Douglas Potter and Romain Teyssier, the astrophysicists report that in only 80 hours of wall-clock time, the code, titled PKDGRAV3, simulated the evolution of over 2 trillion particles while using more than 4,000 GPU-accelerated nodes – and over 25 billion virtual galaxies were extracted from the result. The same model set another standard in computational astrophysics by simulating 8 trillion particles while running on “Titan” at the Oak Ridge Leadership Computing Facility (OLCF) and exploiting roughly 18,000 GPU-accelerated nodes.
The chosen approach for this particular simulation was to use N-body simulations, due to the fact that on such enormous scales gravitational dynamics is non-linear. The dark matter fluid was sampled, using as many macro-particles as possible, in a dynamical system in which all possible states of the system are represented. Each of these macro-particles represents a large set of actual, microscopic particles of dark matter, which evolve collisionlessly while being affected by their mutual gravitational attraction.
The core algorithm – the Fast Multipole Method (FMM) – is a numerical technique which reduces the time needed to calculate long-range forces in N-body systems by using a multipole expansion that allows nearby sources to be grouped and treated as one. This method was introduced by Greengard and Rokhlin in 1987 [14]. PKDGRAV3’s version of the FMM algorithm managed to achieve a peak performance of 10 Pflops.
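The grouping idea can be illustrated with a minimal sketch (this is not PKDGRAV3’s implementation): the gravitational acceleration from a distant clump of particles is approximated by a single term evaluated at the clump’s centre of mass, which is the lowest-order (monopole) term of the multipole expansion that FMM carries to higher orders.

    import numpy as np

    G = 1.0  # gravitational constant in code units

    def accel_direct(target, src_pos, src_mass, eps=1e-3):
        # Direct summation over every source particle.
        d = src_pos - target
        r = np.sqrt(np.sum(d ** 2, axis=1) + eps ** 2)      # softened distances
        return G * np.sum((src_mass / r ** 3)[:, None] * d, axis=0)

    def accel_monopole(target, src_pos, src_mass, eps=1e-3):
        # Treat the whole group as one particle located at its centre of mass.
        com = np.average(src_pos, weights=src_mass, axis=0)
        d = com - target
        r = np.sqrt(np.dot(d, d) + eps ** 2)
        return G * src_mass.sum() * d / r ** 3

    # a distant clump of 10,000 sources: the two estimates agree closely
    src = np.random.normal(loc=[50.0, 0.0, 0.0], scale=0.5, size=(10_000, 3))
    m = np.full(10_000, 1e-4)
    print(accel_direct(np.zeros(3), src, m), accel_monopole(np.zeros(3), src, m))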
At the time PKDGRAV3 was benchmarked on Titan, the supercomputer was ranked the 2nd fastest supercomputer in the world, with a measured LINPACK performance of 17.59 Pflops. Titan’s configuration was the following: a Cray XK7 system with 18,688 compute nodes and a Gemini 3-D torus network, using the AMD Opteron model 6274 with a 2.2 GHz clock speed. It also had the largest total system memory – 584 TB (the highest across the supercomputers used for the simulations, which also included Piz Daint and Tödi). The time-to-solution for evolving 8 × 10^12 particles on this setup was 67 hours. Table 1 presents the detailed benchmark and scaling results for Titan.
Table 1. PKDGRAV3’s performance on Titan

Nodes     Np             Mpc     TFlops      Time/Particle
2         1.0 × 10^9     250     1.2         125 ns
17        8.0 × 10^9     500     10.3        14.7 ns
136       6.4 × 10^10    1,000   82.2        1.84 ns
266       1.3 × 10^11    1,250   152.5       1.00 ns
2,125     1.0 × 10^12    2,500   1,230.3     0.124 ns
7,172     3.4 × 10^12    3,750   4,130.9     0.0365 ns
11,390    5.4 × 10^12    4,375   6,339.2     0.0236 ns
18,000    8.0 × 10^12    5,000   10,096.2    0.0150 ns


The purpose of the simulation was to model galaxies as small as 1/10 of the Milky Way, inside a volume as large as the observable Universe. Based on the output of the run, the overall observational strategy was optimized and several adjustments were made to the experiments that will take place on the Euclid satellite, such as minimizing the sources of error. Euclid’s mission to research the nature of dark energy and dark matter by collecting data will begin in 2020, after the satellite is launched into space, and will last for six years.

Figure 3. A section of the virtual universe, one billion light years across, displaying the distribution of dark matter (Joachim Stadel, UZH)

The authors mention that in order to further improve the code, co-design is required, which involves treating algorithmic and computer hardware developments as one design process. They also claim that in the future, simulations will be required to extract fundamental physical parameters from data gathered by surveys, as opposed to using them only for making predictions or understanding effects. Because the time-to-solution will continue to decrease as computational speed increases, they also expect similar simulations to run in less than 8 hours within the following decade. Given this, instead of storing raw data and post-processing it, data-analysis tools might be attached to the code directly, simply changing the “instrument” on every run.

2.3. Illustris

The Illustris model [15] – created by researchers from several institutions including MIT
and Harvard, represents a cube-shaped piece of the universe that is 350 million light-years
long on each side and it produces features as small as 1,000 light years. The initial
conditions of the model are the parameters of the Universe 12 million years after the Big
Bang, and the following 13.8 billion years are simulated by Illustris. The project’s set of
large-scale simulations tracks the growth of the Universe, the gravitational pull which
matter applies towards itself, the motion of cosmic gas and the emergence of black holes
and stars.
The simulation tracks the evolution of more than 12 billion resolution elements in a volume of (106.5 Mpc)^3 which contains 41,416 galaxies and generated a new degree of
fidelity for certain observed features of the universe. Such features include the distribution
of the different galaxy shapes and preponderance of specific elements in the Universe.
Establishing a link between the distribution of galaxies made up of normal matter and the
observations was achieved by directly accounting for the baryonic component (gas, stars,
supermassive black holes, etc.) as well as gravity. Illustris’s set of modeled physical processes comprises, among others, galactic winds driven by star formation and thermal energy injection from black holes. These aspects contribute to achieving a simulated Universe whose modeled galaxies are realistically distributed.
In the past, hydrodynamic cosmological simulations were used only to study specific problems, due to being very computationally expensive. In recent years, as a result of hardware development, these simulations have been enhanced either by increasing the volume size and element count, by improving the complexity and physical fidelity, or by evolving the numerical methods used. Figure 4 describes this evolution over the past decade.

Figure 4. Evolution of hydrodynamical cosmological simulations [15]

Illustris was run across multiple supercomputers, such as CURIE at the French Alternative Energies and Atomic Energy Commission and SuperMUC at the Leibniz Computing Center in Germany; the list also includes Harvard’s Odyssey, the Texas Advanced Computing Center’s Stampede and Ranger, and Oak Ridge National Laboratory’s Kraken. The largest run took 19 million CPU hours over 8,192 cores.
AREPO [16], the code behind the simulation, uses an unstructured Voronoi tessellation of
the simulation volume, where the mesh-generating points of this tessellation are moved
with the gas flow. The adaptive mesh is used to solve the equations of ideal
hydrodynamics with a finite volume approach using a second-order unsplit Godunov
scheme with an exact Riemann solver.
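A minimal, illustrative sketch of the moving-mesh geometry (in 2D, using SciPy’s Voronoi routine) is given below: the mesh-generating points are advected with a toy velocity field and the tessellation is rebuilt each step. The velocity field and step size are arbitrary assumptions, and the finite-volume flux exchange actually performed by AREPO is only indicated by a comment.

    import numpy as np
    from scipy.spatial import Voronoi

    rng = np.random.default_rng(0)
    points = rng.random((200, 2))            # mesh-generating points in a unit square

    def velocity(p):
        # toy shear flow: horizontal velocity varies with height
        return np.column_stack([p[:, 1] - 0.5, np.zeros(len(p))])

    dt = 0.01
    for step in range(10):
        vor = Voronoi(points)                # rebuild the unstructured mesh
        # ... a finite-volume solver would exchange fluxes across vor.ridge_points here ...
        points = points + dt * velocity(points)   # advect the mesh generators with the flow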
The authors mention several achievements obtained when comparing the output of the
simulation against observations:
• successfully reproducing a wide range of observable properties of galaxies and
the relationships between these properties.
• precisely measuring the gas content of the universe, and where it resides
• investigating the number of "satellite" galaxies, their properties, and their
connection to cosmology
• study changes in internal structure as galaxy populations evolve in time
• the impact of gas on the structure of dark matter
• “mock” observations

Figure 5. Mock images of the simulated galaxy population [16]


3. ALTERNATIVE APPROACHES

Simulations such as Bolshoi, PKDGRAV3 and Illustris make a solid statement regarding the “bright future” of computational astrophysics. As projects similar to these three will “inevitably” scale up along with hardware developments and produce larger volumes of higher quality data, another player might join the scene and contribute to explaining the mysteries of dark matter and dark energy.
In their latest work [17], several experimental physicists working for the Institute for
Quantum Optics and Quantum Information of the Austrian Academy of Sciences built a
quantum computer with four qubits, controlled by laser pulses. The qubits are in fact
calcium ions, trapped electromagnetically.
The quantum computer was used to simulate virtual particles in vacuum, showing that
quantum computers can simulate the way particles may behave at extremely high energy
levels, which are hard to generate on Earth. Even though a quantum computer of such a small scale might one day be enough to tackle problems which are otherwise impossible using classical approaches, this particular problem could also be computed by classical computers.
Also, researcher and co-author Peter Zoller mentions that - "We cannot replace the
experiments that are done with particle colliders. However, by developing quantum
simulators, we may be able to understand these experiments better one day." [18]
In the future, quantum simulations might aid researchers, for example, to mimic the
dynamics inside neutron stars.

REFERENCES

[1] Computational astrophysics - http://www.scholarpedia.org/article/Computational_astrophysics
[2] Computational Astrophysics and Cosmology - Simulations, Data Analysis and Algorithms - https://comp-astrophys-cosmol.springeropen.com/
[3] The largest virtual Universe ever simulated - https://www.sciencedaily.com/releases/2017/06/170609102251.htm
[4] Physical Cosmology - https://en.wikipedia.org/wiki/Physical_cosmology
[5] “How to Simulate the Universe in a Computer” - Alexander Knebe, Centre for Astrophysics and Supercomputing, Swinburne University of Technology
[6] Wilkinson Microwave Anisotropy Probe - https://map.gsfc.nasa.gov/
[7] Planck (ESA) - http://www.esa.int/Our_Activities/Space_Science/Planck
[8] Lambda-CDM model - https://en.wikipedia.org/wiki/Lambda-CDM_model
[9] Smoothed-particle hydrodynamics - https://en.wikipedia.org/wiki/Smoothed-particle_hydrodynamics
[10] N-body simulation - https://en.wikipedia.org/wiki/N-body_simulation


[11] “Dark matter halos in the standard cosmological model: results from the Bolshoi simulation” - A. Klypin, S. Trujillo-Gomez, J. Primack - https://arxiv.org/pdf/1002.3660.pdf
[12] “The MultiDark Database: Release of the Bolshoi and MultiDark Cosmological Simulations” - K. Riebe, A.M. Partl, H. Enke, J. Forero-Romero, S. Gottlober, A. Klypin, G. Lemson, F. Prada, J. R. Primack, M. Steinmetz, V. Turchanino - https://arxiv.org/pdf/1109.0003.pdf
[13] “PKDGRAV3: beyond trillion particle cosmological simulations for the next era of galaxy surveys” - Potter, D., Stadel, J., Teyssier, R.
[14] “A Fast Algorithm for Particle Simulations” - L. Greengard, V. Rokhlin, 1987
[15] “Properties of galaxies reproduced by a hydrodynamic simulation” - M. Vogelsberger, S. Genel, V. Springel, P. Torrey, D. Sijacki, D. Xu, G. Snyder, S. Bird, D. Nelson, L. Hernquist
[16] The Illustris project - http://www.illustris-project.org
[17] “Real-time dynamics of lattice gauge theories with a few-qubit quantum computer” - Nature 534, 516–519 (23 June 2016)
[18] https://www.livescience.com/55196-quantum-computers-simulate-beginning-of-universe.html


FEATURES OF SMART LEARNING

Mariana Coanca 1*

ABSTRACT

The first section of our paper offers an insight into pedagogy and into how technology can contribute to successful teaching and learning. The paper points out that both teachers and learners have realized the importance of communicational competence in exchanging information, knowledge, experiences, values, attitudes, etc. for a more efficient use of m-learning opportunities to enhance language skills. Therefore, in the next sections we focus on presenting the most popular mobile apps, which are essential for a transformational learning experience. Developing a strategy that relies on the innovative and meaningful use of technology will cater for diverse student needs.

KEYWORDS: smart class, m-learning, pedagogy, foreign languages.

1. INTRODUCTION

Any teaching process has as a general purpose the exploration and optimal exploitation of
students' learning resources. One of the most important such resources is the learning
style. Learning style is the expression of a learning strategy specific to each learner. Unlike the cognitive style, which refers to the organization and control of cognitive processes, the learning style refers to the organization and control of learning and knowledge-acquisition strategies. Closely related to the overall structure of the student's personality, the learning style comprises, in fact, the individual preferences for a specific learning environment, the preferred learning and study pathways, the preference for structured versus unstructured situations, for teamwork versus individual learning, and for the rhythm of learning, with pauses or sustained (Woolfolk, 1998:128). According to Kolb, the learning style
identifies the concrete ways in which the individual reaches changes in behavior through
experience, reflection, experiment and conceptualization (Cerghit apud Kolb, 2002: 208).
More recent studies have identified three categories of learning style: style centered on
meaning, specific to the student predisposed to engage in learning tasks based on intrinsic
motivation (curiosity, pleasure); reproductive style, which describes the student who is
predisposed to engage in learning tasks and makes special efforts for fear of failure
(extrinsic motivation); a style focused on acquisitions, which involves extrinsic
motivation which is linked to the hope for success (this student will make special efforts to acquire more and more knowledge in the hope of the rewards it brings). Other authors prefer the phrase “approach” instead of learning task. Research on learning style has
attempted to identify not only the variables that define the learning style but also their

1* corresponding author, Lecturer, Ph.D., Romanian-American University, Bucharest, Romania, coanca.mariana@profesor.rau.ro


influence on learning, as well as those variables that can be controlled and optimized by training.
Kolb’s Inventory (1981) (Kolb Learning Styles Delineator) analyzes students’ preferences
for: Concrete Experience (Learning by Direct Involvement); Abstract conceptualization
(learning by building concepts and theories for describing, explaining and understanding
their own observations); Reflective observation (learning by observing others or by
reflecting on one’s own experiences or others); Active experimentation (learning by using
available theories and concepts to solve problems and make decisions).
Heppell (1993) pinpointed three learning stages that are noticeable in student’s use of
technology: a narrative stage characterized by observing and listening to things on the
technology; an interactive stage where there are opportunities to explore; and a
participative stage in which the learner is able to create new media as a result of the
investigations.
In the twenty-first century the textbook is no longer the norm. It has become incidental and has taken on different forms, such as wireless technology, hand-held devices, m-learning and e-mail dialogue. Technological knowledge and skills will be essential components in training programs for teaching staff. These are part of the lifelong learning paradigm, which was driven by the rapid social, technological and economic changes that have led people to prepare for second or third careers and to keep themselves updated on new developments that affect their personal and social goals (Ornstein & Levine, 2008: 438).

2. M-LEARNING UPTAKE

The following mandatory aspects for a responsible educational attitude towards all the actors involved are highlighted in many studies (see Bradea, 2009; Marinescu, 2009):
- Selecting didactic methods, procedures, and specialized content according to the
learners’ needs and the dynamics of their knowledge;
- Organizing professional situations/contexts in which the interaction of learning
prevails, to place the learner at the center of the learning process, stimulating receptivity,
productivity and creativity in the target language;
- Stimulating, maintaining and capitalizing on motivational tensions in activities which
target text and the context of specialization;
- Enhancing the foreign language teacher-student relationship, in the sense of mutually
beneficial cooperation for the high efficiency of the instructive-educational process;
- Adjusting the syllabi to meet the needs of learners and introduce digital pedagogy;
- Modernizing the study conditions (specialized content manuals, multi-media
laboratories, etc.) that facilitate the activities and operations for the amplification of
general and communicative skills in favor of the specialist language in foreign language;
- Indicating the need for a possible (optional) orientation in psycho-pedagogical training
and methodologies of future modern language teachers, in line with the training
requirements at university level and socio-cognitive maturity of educators;


- Signaling the need for an (optional) orientation in psycho-pedagogical training and methodology of future modern language teachers, in accordance with the training requirements at university level and the socio-cognitive maturity of educators;
- Developing programs of continuous training for teachers of foreign language
(seminars, conferences, psycho-pedagogical and applied didactics studies, etc.);
The active participatory methods include all those methods that trigger an active learning
state, a learning that is based on one's own activity. These are the methods that lead to the
active forms of learning, explorative learning, problem-solving, learning by doing,
creative learning; these are methods that train students to carry out independent study,
work with books, research, practical things, creative exercises, etc. Active learning engages productive-creative capacities, thinking and imagination, and appeals to the mental and cognitive structures that the student already has and which are used in producing new learning. Modern education is based on an action-oriented methodology. From this point
of view, active-participatory methods are based on the idea of the operational
constructivism of learning (Cerghit, 2006).
Self-education can be considered “a conscious, systematic activity, oriented towards a
goal that each individual deliberately proposes and for which it is necessary to make a
personal effort” (Marinescu apud Barna, 2009). The consciousness is imprinted by the
fact that self-education is a process that is done voluntarily when the person has reached a
degree of maturity that allows him to realize the importance of self-education in his
personal and professional development. Promoting self-education can be a formative
objective, which can be designed and pursued in educational approaches (effect) and at
the same time, a premise (because of) an education of genuine quality. The main stages of
self-education (Marinescu apud Barna, 2009; Marinescu apud Blândul, 2005) are the
following: understanding the need for change as self-education starts from a reality that
needs to be changed, the person being aware of this need; existence of the desire for
change, in other words the intrinsic motivation of the desire for change is an essential
condition in the realization of self-education; self-analysis of own resources and
possibilities when there is a need for self-knowledge for the identification of the strong
and vulnerable points, respectively for capitalizing the bonuses and correcting the latter;
establishment of the proposed objectives – which are realistic objectives and in
accordance with the person's possibilities of identification; proceed in line with the
proposed objectives, which implies a voluntary effort to overcome external and internal
barriers; evaluation of the results of that action, which is in line with the initial objectives
and the requirements in which they were achieved.
M-learning is a new paradigm that creates a new educational environment where learners have access to course syllabi, instructions and applications anytime, anywhere. It is a learning situation that integrates mobile connection tools, creating the premise of a practically global, planetary-scale space for spreading messages. M-learning is different from traditional training in the sense that all components of traditional learning change their functionality in mobile learning.
Both teachers and learners, for an efficient use of m-learning opportunities, should be
aware of the importance of communicational competence in exchanging information,
knowledge, experiences, values, attitudes, etc. (Marinescu, 2009: 85-87).


Several novelties induced by m-learning compared to e-learning are outlined below (Marinescu apud Sharma & Kitchens, 2004):
- Instruction based on several audio, visual and animated sequences;
- Learning takes place in mobile locations, spread anywhere;
- Notification of the arrival of an email is done instantly;
- Communication is direct, spontaneous, synchronous;
- Flexibility;
- Audio and video conferencing;
- 24 hours / 7 days, instant connections;
- Unlimited connection space;
- Connection is done without waste of time;
- Group is made up of virtual connectivity;
- Rich communication, without inhibitions, with subjective inflexions;
- Placement and testing is in any place;
- Time interval is variable, as much as each student needs;
- Individualized tests;
- Rich feedback;
- Immediate feedback;
- Flexibility in terms of difficulty and number of problems to solve;
- Tests are based on text, but they are also based on audio and video interventions
- Marking and notifying the results is done electronically;
- Examination is done when the learner is available;
- Interlocutor time is used to support and individualize training;

3. POPULAR TOOLS

a. Lingoes

It is a dictionary and multi-language translation software providing results in over 80 languages. As a new generation of dictionary and translation software, it offers full-text translation, on-screen word capture, translation of selected text and text pronunciation, together with abundant free dictionaries. Lingoes offers users the fastest way to look up dictionaries and translate among English, French, German, Spanish, Italian, Russian, Chinese, Japanese, Korean, Swedish, Thai, Turkish, Vietnamese, Greek, Polish, Arabic, Hebrew and, in total, more than 80 languages. It is one of the best tools for learning all kinds of languages. (http://www.lingoes.net/en/translator/index.html)
With the creative cursor translator, Lingoes automatically recognizes a word and shows its definition as soon as one moves the cursor over any text and presses the key. It has the full features of current popular commercial software, while creatively developing cross-language design and open dictionary management. We noticed that plenty of dictionaries and thesauruses are listed for free download.
(http://www.lingoes.net/en/translator/index.html)


Figure 1. Lingoes interface

We have listed its features below (http://www.lingoes.net/en/translator/index.html):


- Lingoes offers text translation and dictionaries in over 60 languages in the world
and supports cross translation between different languages;
- The online translation service offered by Lingoes integrates the most advanced
text translation engines in the world, including Systran, Promt, Cross, Yahoo,
Google and Altavista, etc, which makes text translation very easy. One can freely
choose these engines for translation and compare the different results generated
by different engines to help understand the texts in languages which he/she is not
familiar with;
- One can translate words anywhere on the screen by using the cursor translation function of Lingoes. By simply pressing the Shift key, the system will automatically recognize the word selected by the cursor and display the results;
- Lingoes integrates a cursor translator, dictionary look-up and intelligent translation through the creative “translate selected text” feature. Once a word or sentence is selected on screen with the cursor, it can translate text in as many as 23 languages into the native language;
- Lingoes provides word and text pronunciation based on the newest Text to Speech (TTS) engine, which helps one quickly learn the pronunciation of words and is very convenient for study and memorization;
- The open dictionary management allows users to easily download and install dictionaries according to their own needs;


- Thousands of dictionaries, in all kinds of languages and fields, are provided for users to download and use for free;
- Without local dictionaries, one can make use of the online dictionary service and get more results;

b. Duolingo

This app has become a very popular example of mobile language learning, mainly because it is not aimed solely at English native speakers. Many Duolingo courses are created by native speakers themselves, which empowers communities and language enthusiasts to get involved and has given rise to perhaps less expected courses such as Guarani or Klingon. For each language there are specific courses aimed at those with different first languages, which to date produces 81 courses. (https://www.lingualift.com/blog/best-language-learning-apps/)

Figure 2. Duolingo interface

Since launching in 2012, more than 150 million students from all over the world have enrolled in a Duolingo course, either via the website or the mobile apps for Android and iOS.
In order to establish its efficacy, Feifei Ye assesses some of the evidence for validity and
reliability of the Duolingo English Test for non-native English learners. In addition, the
Duolingo test scores were linked to TOEFL iBT scores to establish concordance. Scores
from the Duolingo English Test were found to be substantially correlated with the TOEFL
iBT total scores, and moderately correlated with the individual TOEFL iBT section
scores, which present strong criterion-related evidence for validity. The Duolingo test
scores presented high test-retest reliability over a two-week interval. Equipercentile
linking was used to establish concordance between TOEFL scores and the Duolingo test
scores. Duolingo English Test scores are on a scale of 0–100 and TOEFL iBT total scores
are on a scale of 0–120. For international students to apply for studying in US
universities, the minimum cut-off score of TOEFL iBT is 80 and a more selective cut-off
score is 100, corresponding to scores 50 and 72 respectively on the Duolingo English Test
(Ye, 2014).
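The linking procedure can be illustrated with a minimal, hypothetical sketch: a Duolingo score is mapped to the TOEFL iBT score that sits at the same percentile of its own score distribution. The two score samples below are synthetic and used for illustration only, not the study's data.

    import numpy as np

    def equipercentile_link(score, sample_a, sample_b):
        # percentile rank of the score within the scale-A sample ...
        pct = 100.0 * np.mean(np.asarray(sample_a) <= score)
        # ... mapped to the scale-B score at the same percentile
        return np.percentile(sample_b, pct)

    rng = np.random.default_rng(1)
    duolingo_scores = rng.normal(55, 15, 2000).clip(0, 100)   # synthetic 0-100 scores
    toefl_scores = rng.normal(85, 18, 2000).clip(0, 120)      # synthetic 0-120 scores
    print(equipercentile_link(50, duolingo_scores, toefl_scores))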
A study worth mentioning, by Settles & Meeder, uses data from Duolingo to fit half-life regression (HLR) models, reducing error by 45% compared to several baselines at predicting student recall rates. HLR model weights also shed light on which linguistic concepts are systematically challenging for second language learners. Finally, HLR was able to improve Duolingo daily student engagement by 12% in an operational user study. The spacing effect is the observation that people tend to remember things more effectively if they use spaced repetition practice (short study periods spread out over time) as opposed to massed practice (i.e., “cramming”).
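HLR assumes that recall decays exponentially with the time elapsed since the last practice, with a half-life predicted from features of the practice history. A minimal sketch of that prediction step, with made-up weights and features, is shown below; the feature set used by Settles & Meeder is richer.

    import numpy as np

    def hlr_recall(delta_days, theta, features):
        # half-life h = 2^(theta . x); predicted recall p = 2^(-delta / h)
        half_life = 2.0 ** np.dot(theta, features)
        return 2.0 ** (-delta_days / half_life)

    theta = np.array([0.3, -0.2, 1.0])   # hypothetical learned weights
    x = np.array([5.0, 1.0, 1.0])        # e.g. 5 correct recalls, 1 lapse, bias term
    print(hlr_recall(delta_days=3.0, theta=theta, features=x))   # roughly 0.65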
Another study makes several contributions to student modeling. The author presents
models of student learning that generalize several prominent existing models and
outperform them on real-world datasets from Duolingo. Second, he shows how these
models can be used to visualize student performance in a way that gives insights into how
well an intelligent tutoring system “works”, improving upon the population-level learning
curve analysis that is typically used for this purpose. Finally, by demonstrating that
relatively simple mixture models can deliver these benefits, the author expressed his hope
that further work will focus on more sophisticated approaches that use mixture models as
a building block (Streeter, 2015:1).
The Duolingo dataset consists of a collection of log data from Duolingo. Students who
use Duolingo progressed through a sequence of lessons, each of which took a few minutes
to complete and taught certain words and grammatical concepts. Within each lesson, the
student was asked to solve a sequence of self-contained challenges, of various types. For
example, a student learning Spanish may be asked to translate a Spanish sentence into
English, or to determine which of several possible translations of an English sentence into
Spanish is correct. For these experiments, the author focused on listening challenges, in
which the student listens to a recording of a sentence spoken in the language they are
learning, then types what they hear. Listening challenges are attractive because, unlike challenges that involve translating a sentence, they have only one correct answer, which simplifies error attribution. For these experiments the author used a simple bag-of-words
knowledge component (KC) model. There is one KC for each word in the correct answer,
and a KC is marked correct if it appears among the words the student typed. For example,
if a student learning English hears the spoken sentence “I have a business card” and types
“I have a business car”, then the author would mark the KC card as incorrect, while
marking the KCs for the other four words correct. This approach is not perfect because it
ignores word order as well as the effects of context (students may be able to infer which
word is being said from context clues, even if they cannot in general recognize the word
when spoken). However, the learning curves generated by this KC model were smooth
and monotonically decreasing, suggesting that it performs reasonably well (Streeter,
2015:48).
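A minimal sketch of this bag-of-words knowledge-component marking, reproducing the “business card” / “business car” example from the text, is shown below (the function name is illustrative).

    def mark_kcs(correct_answer, typed_answer):
        # one KC per word of the correct answer; a KC is correct if the word
        # appears among the words the student typed (word order is ignored)
        typed_words = set(typed_answer.lower().split())
        return {word: word in typed_words for word in correct_answer.lower().split()}

    print(mark_kcs("I have a business card", "I have a business car"))
    # {'i': True, 'have': True, 'a': True, 'business': True, 'card': False}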


c. Hello Talk

An app that aims to facilitate speaking practice and eliminate the potential stress of real-time conversation. Learners can find native speakers and converse with them using a WhatsApp-like chat with voice and text messages.
Users can correct each other’s messages with an in-built correction tool, which transforms
the language exchanges into tiny tutoring sessions. The app also has an integrated
translation system to help one avoid those moments when he/she really wants to
communicate something but just lacks the one word that gives the sentence its proper
meaning. (https://www.lingualift.com/blog/best-language-learning-apps/)

d. Busuu

Busuu offers full courses in 12 languages. The app is free but to unlock most of the
features and course materials one has to invest $17 a month. The app takes the learner
through learning individual words to simple dialogues and questions about the dialogues
all of which include audio where the learner can listen to native pronunciation.
The lessons are organised in topical themes where we learn skills and expressions
connected to tasks. The special aspect of Busuu is that learners can engage native
speakers in their personal learning process. Busuu learners contribute their native
speaking skills to the platform by correcting texts created by those who study their
language. The desktop version even allows learners to chat to native speakers in real time
(https://www.lingualift.com/blog/best-language-learning-apps/)

e. Babbel

The free version comes with 40 classes, so even without investing money the app allows
students to learn a fair amount of phrases in one of the 13 languages it teaches. Each class
starts with step-by-step teaching of vocabulary with the aid of pictures. The words are then used in related phrases and short dialogues adjusted to the student's level, to help quickly build conversation skills. Handy pop-ups within the app explain the most important grammatical points related to the learned material, and the desktop version includes short cultural notes.
Apart from the general beginner’s courses, Babbel also has separate packages devoted to
improving specific skills such as grammar or vocabulary.
(https://www.lingualift.com/blog/best-language-learning-apps/)

f. Memrise

The fun of Memrise lies in two things: memes and gamification. The app follows a
learning method that relies on creating funny or bizarre associations with the studied
words. Courses are often coupled with memes designed to playfully help remember the
vocabulary. The memes are created by the community and everyone can add their own.
The power of Memrise also lies in two things: spaced repetition and mnemonics. The
spaced repetition algorithm calculates when and how often one should review each word
and the app will send the learner reminders when it’s time to review.
(https://www.lingualift.com/blog/best-language-learning-apps/)


g. Leaf

Based on one’s location and phone usage the app suggests most relevant lessons. It can
predict what contexts will enhance learners’ experience and suggest contexts to suit the
needs of the moment. The lessons are short and written in simple English. Each teaches
you a specific practical skill and can be read in a couple of minutes.
(https://www.lingualift.com/blog/best-language-learning-apps/)

h. Lingua.ly

At first the app will assess the level of the learner: by asking whether he/she knows specific words, it will estimate the learner's level and the range of the learner's vocabulary. As one learns, he/she will be shown a text. By clicking on an unknown word, one will see its translation, hear it pronounced aloud and have it added to the learner's database of words. Based on this feedback the app will be able to match future texts to one's level more accurately. The study material is based on texts pulled from the internet, therefore learners will never complain about a lack of material. (https://www.lingualift.com/blog/best-language-learning-apps/)

i. TripLingo

As the name suggests, the app is aimed at travellers who need to improve their language skills before their dream holiday. The app is designed to get one to speak and be understood, so that he/she should not feel lost in a foreign environment. A feature called the slang slider displays different levels of formality or casualness for each phrase, so one can adjust it to the specific context he/she is in. The lessons are divided into handy sections such as “safety phrases” or “business phrases”. TripLingo is also an emergency resource. It has an inbuilt voice translator rendering English into the foreign language, and when one is really at a loss for words he/she can even call a real translator.
(https://www.lingualift.com/blog/best-language-learning-apps/)

4. CONCLUSIONS

The implemented interactive technology is used below its technical and pedagogical
potential. In order to develop this potential, close collaboration is needed between
technology developers and teachers in different education segments. The main needs
identified in the learning environment in the studies conducted so far on interactive
teaching resources are: interactivity, compatibility and independence of certain
technologies, fast access, interculturality issues and knowledge of an international
language, collaborative learning, flexibility and adaptability.
Teaching methods used in foreign language acquisition are those centered on learning,
developing cooperation skills, communication and relationship; due to technological
advancements, we need to develop new strategies for the mobile age and conceptualize
learning, reconsider traditional methods and capitalize on them in a digital environment to
create diverse learning situations with regard to the acquisition of a foreign language.


REFERENCES

[1] Angelelli V. Claudia. (2015). Studies on Translation and Multilingualism. Public Service Translation in Cross-Border Healthcare. Final Report for the European Commission Directorate-General for Translation, Reference: DGT/2014/TPS. Luxembourg: Publications Office of the European Union, ISBN 978-92-79-47163-6, doi: 10.2782/765472
[2] Bradea O.L. (2009). Principii şi criterii în activitatea de predare-învăţare şi evaluare
a limbilor străine moderne la nivel universitar. The Proceedings of the “European
Integration – Between Tradition and Modernity” Congress, 3, pp. 1215-1226,
Editura Universităţii Petru Maior, Tîrgu Mureş, ISSN 1844-2048.
[3] Braun S. and Taylor J. (2011): „Rezultatele AVIDICUS partea I: Perspectivele
serviciilor juridice şi a interpreţilor juridici asupra interpretării tip videoconferinţă
sau la distanţă – rezultatele a două sondaje europene”.
http://www.videoconferenceinterpreting.net/files/AVIDICUS_symposium_abstracts.pdf
[4] Settles, B., Meeder, B. (2016). A Trainable Spaced Repetition Model for Language
Learning. Proceedings of the Association of Computational Linguistics (ACL), pp.
1848-1858.
[5] Cerghit, I. (2008). Metode de învăţământ, Editura Polirom, Iaşi.
[6] Cerghit, I. (2002). Sisteme de instruire alternative şi complementare. Structuri,
stiluri şi strategii, Ed. Aramis, Bucureşti.
[7] Feifei Ye. (2014). Validity, Reliability, and Concordance of the Duolingo English
Test Technical Report, University of Pittsburgh, May 2014
englishtest.duolingo.com/resources
[8] Heppell, S. (1993). Teacher Education, Learning and the Information Generation: The
Progression and Evolution of Educational Computing Against a Background of Change.
Journal of Information Technology for Teacher Education, vol. 2, no.2, pp. 229-237.
[9] Margaritoiu A., Brezoi A. (coord. Suditu M.) Metode Interactive de Predare-
Învăţare, suport de curs.
[10] Marinescu M. (2009). Tendinţe şi orientări în didactica modernă Editura Didactică
şi Pedagogică, Bucureşti.
[11] Ornstein, A., Levine, D. (2008). Foundations of Education. Houghton Mifflin
Company, Boston.
[12] Streeter Matthew. (2015). Mixture Modeling of Individual Learning Curves. Proceedings
of the International Conference on Educational Data Mining (EDM), pp. 45-52.
[13] Woolfolk, A. E. (1998). Educational psychology. Boston Allyn & Bacon.
Online sources:
https://www.lingualift.com/blog/best-language-learning-apps/
http://www.lingoes.net/en/translator/index.html


BEHAVIOR CHARACTERISTICS OF MOBILE WEB APPLICATIONS AUTHENTICATED USERS

Alin Zamfiroiu 1*
Carmen Rotună 2

ABSTRACT

The Internet nowadays facilitates interaction and collaboration between persons located very far from each other. It also facilitates interactions between software systems located at great distances. The key concept is the identification of the subject involved in the interaction. It is necessary to validate whether the person performing certain activities on a software system is the person entitled to do so and not an unauthorized one. In this paper we propose a study of the current context of the user authentication domain in various online applications.
Therefore, the purpose of this document is to provide stakeholders (software developers) with the characteristics of user conduct within online mobile applications. Based on these features, specific profiles can be created for each user. By using the completed profiles, we can build recognition models based on user behavior.

KEYWORDS: characteristics, behavior, users, online applications, mobile applications

1. INTRODUCTION

According to [1], mobile devices are increasingly used and popular. More and more software developers have begun to create applications dedicated to these devices. Thus, most online applications have an online mobile version.
The traditional authentication model, where authentication is password based, creates major inconveniences for mobile devices, where device limitations and consumer behaviour require an integrated, convenient and also secure solution.
Password-based authentication represents, very often, a solution vulnerable to attacks. By
adding a second factor as part of the authentication process, increased security is
achieved.
Thus, this paper proposes the implicit authentication, while using observations on user
behavior within the application.

1* corresponding author, Senior Researcher, The National Institute for Research & Development in Informatics Bucharest, university assistant, Department of Economic Informatics and Cybernetics, Bucharest University of Economic Studies, Romania, zamfiroiu@ici.ro
2 Scientific Researcher, The National Institute for Research & Development in Informatics Bucharest, Romania, carmen.rotuna@rotld.ro


Considering the above-mentioned reasons, implicit authentication is particularly suitable for mobile devices and laptops. However, this authentication method may be implemented for any type of device.
The device usage pattern varies from one person to another. This type of information can
be useful to create a more detailed profile for each user.
Implicit Authentication:
• acts as a second factor and supplements password authentication;
• acts as a primary authentication method, replacing password authentication
entirely;
• provides additional security for financial transactions such as purchases by credit
card, acting as a barrier against fraud.
The method for determining the score from previous authentications is based on the
identification and analysis of user behavior characteristics.
In the proposed model, when behavior analysis determines a level below a certain
threshold, the user is required to authenticate explicitly by entering a passcode.
The threshold that will require explicit authentication may vary for different applications,
depending on the intended security level.
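A minimal, hypothetical sketch of this thresholding idea is shown below: observed behavioral features are compared against the stored profile to produce a score, and a score below the application's threshold triggers explicit authentication. The feature names, the scoring rule and the threshold are illustrative only, not a prescribed model.

    import math

    def behavior_score(observed, profile):
        # crude similarity in (0, 1]: higher means the observed behavior is more typical
        diffs = [abs(observed[k] - profile[k]) / (abs(profile[k]) + 1e-9) for k in profile]
        return math.exp(-sum(diffs) / len(diffs))

    def authenticate(observed, profile, threshold=0.6):
        # implicit authentication, with a fallback to an explicit passcode
        if behavior_score(observed, profile) >= threshold:
            return "implicitly authenticated"
        return "explicit passcode required"

    # hypothetical per-user profile: typing speed, usual login hour, calls per day
    profile = {"typing_speed": 220.0, "login_hour": 9.0, "calls_per_day": 6.0}
    observed = {"typing_speed": 210.0, "login_hour": 10.0, "calls_per_day": 5.0}
    print(authenticate(observed, profile))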
There are solutions to reduce the authentication concerns (Single Sign-On - SSO), but
these identify the device, not the rightful user.
Therefore, SSO does not defend well against theft or exchanging devices, where the
devices are shared voluntarily.
According to studies on authentication process perception for mobile devices, it appears
that a transparent authentication experience is recommended, which enhances security.
Users were receptive to biometric authentication and behavioral indicators.
Implicit authentication forms are for example location-based access control, biometric
methods, dynamic typing model and keyboard shortcuts.
Recently, the accelerometers of some devices have been used for user identification and
profiling.
Implicit authentication uses a variety of data sources for authentication decisions. For
example, modern mobile devices offer rich data collections on user behavior, such as:
• location and co-location;
• accelerometer measurements;
• WiFi, Bluetooth or USB connections;
• biometric style measurements, such as entering text and voice data;
• contextual data such as calendar entries content.
Also, auxiliary user information could be another source of data for implicit authentication.
The mobile device itself can take the authentication decisions to determine if a password
is required to unlock the device or a given application. In this case, data can be stored
locally, which is beneficial for privacy. It is also possible to use local authentication to
access remote services, for example, using the SIM card, the user can sign and send an
authentication decision to the service provider. It must be considered, however, that
although this approach protects user privacy, it does not protect against device theft. If the device is stolen, an attacker can obtain the information stored in memory and find information about the user.
All approaches, even those where data is held locally, have the potential for confidentiality breaches.
The user profile model should contain all of the user's behavior patterns, for example, how frequently he makes phone calls to numbers from the phonebook, or the order of placing calls to certain phone numbers.
In general, the user model may also take into account combinations of indicators.
User behavior usually depends on the time of the day and the day of the week. People are
generally at work in the same location on working days, but their location varies during
weekends.
According to [2] standard password-based authentication is vulnerable immediately after
login, as there is no mechanism to verify continuously the user's identity. This can be a
serious problem, especially for sensitive platforms, offering facilities to their users based
on username and password only. Therefore, a method that allows user continuous
authentication is extremely helpful.
An alternative to the password-based authentication method is the biometric data based method. Biometric identification methods address user identification by using physical characteristics (e.g. face, fingerprint, iris) or behavioral traits (e.g. keystroke dynamics, mouse dynamics, etc.).

2. ONLINE PLATFORMS AUTHENTICATION METHODS

The most common authentication types available for online applications differ in the level of security provided by combining the factors involved in the process. The security level of an application varies depending on the category of the authentication factors:
• Username and password-based authentication - the most common example is authentication based on a single factor, the password. The security of the
password depends on the diligence of the person who sets up an account: the system
administrator or user. Best practices include creating a strong password and ensuring
that no one can access it. One of the main issues about setting a password is that most
users either do not understand how to create strong memorable passwords, or
underestimate the need for security. Additional policies that increase complexity lead to high volumes of requests for password-related issues in the enterprise
environment. This problem can result in the use of simplistic rules for creating
passwords, and as a result, reduced length and complexity passwords tend to be used
most frequently. These passwords can be cracked within a few minutes, making them
almost as ineffective as if no password is used or if a password is written on a paper
and discovered by a malicious person. Therefore, safety measures are needed to
prevent these situations, such as creating less predictable passwords. Password testing predicts the ease with which it could be broken by: guessing it, "brute force"
attacks, "dictionary" attacks or other common methods.
Given the increasing speed in machine processing, "brute force" attacks pose a real
threat to passwords. Using, for example, general-purpose graphics processing units (GPGPU), hackers can produce more than 500 million passwords per second, even
using low-performing hardware. Depending on the particular software, "rainbow
tables" are useful to reverse the cryptographic algorithms and can be used to crack 14
characters alphanumeric passwords in about 160 seconds. This is done by comparing
the password database with a table of all possible encryption keys.
Social engineering methods are also a major threat to password-based authentication
systems. To reduce the likelihood of such an attack, an organization must involve
everyone and spread awareness, from management to their employees, given the fact
that the complexity of the password has no importance, if an attacker tricks a user to
divulge it. Even IT personnel, if not properly trained, can be exploited through
invalid passwords related requests. All employees must be aware of phishing tactics,
in which fake e-mails and websites can be used to acquire sensitive information from
one recipient. Other threats, such as Trojans, can be received also in e-mail
messages. As a conclusion, password authentication is one of the easiest methods to
hack.
Password-based security may be appropriate to protect systems that do not require a
high level of security, but even in these cases, constraints should be applied to protect
them. For any system that needs increased security, stronger authentication methods
should be used.
Strong authentication is sometimes considered synonymous with multifactor
authentication. However, single factor authentication is not necessarily weak in all
cases. Many biometric authentication methods, for example, are strong when
implemented properly.
• Biometric Authentication - biometric verification is considered a sub-group of
biometric authentication. The biometric technologies involved rely on how individuals can be uniquely identified by one or more distinctive biological features, such as
fingerprints, hand geometry, the structure of the retina or iris, voice, dynamic
keyboard usage, DNA or signatures.
Biometric authentication is based on the use of a proof of identity as part of a process
of authorizing a user to access a system. Biometric security technologies are used for
a wide range of electronic communications, including enterprise security, trade and
online banking.
Biometric authentication systems compare biometric data from user with the
authentic, verified data, stored by the system. If they are identical, the authentication
is confirmed and the access is granted. This process is sometimes a part of a
multifactor authentication system. For example, a smartphone user might connect
with his personal identification code, and then provide a retina scan to complete the
authentication process.


Nowadays there are several methods for collecting and reading biometric data to
ensure strong authentication. Any of the biometric identification methods has certain
characteristics that make it suitable for use in an authentication process. Some are
fast, others can be used without the subject’s knowledge and others are very difficult
to forge.
Biometric authentication methods examples:
a. digital signature;
b. fingerprint;
c. facial recognition;
d. retina scan;
e. iris scan;
f. hand geometry;
g. voice analysis.
• Two-factor Authentication (2FA)
"Two-factor" Authentication (also known as 2FA) is a type of multi-factor
authentication based on unambiguous identification of users by combining two
different components. These components can be something the user knows,
something the user has, or something that is inseparable from the user. Two-factor
authentication requires two types of credentials before a user can connect to an
account or system, confirming that the entity that wants to access the account is
indeed the rightful user.

Figure 1. Two-factor Authentication [4]

The use of this system to validate a person's identity is based on the assumption that it is unlikely for an unauthorized entity to provide the two factors required for access. If, in an attempt to log in, at least one component is missing or incorrectly provided, the user's identity is not established with certainty and therefore the access request is rejected.
The 2FA security system relies on the fact that the rightful user must provide two items of identification from different categories. Typically, proof of identity is composed of two items: something memorable, such as a security code or password, and physical evidence, such as an identity card. The second authentication factor increases


security because, even if an intruder steals a password, he must also gain access to the physical device in order to enter the user's account.
Examples of factors involved in two-factor authentication are:
a. a password sent as text;
b. a PIN number;
c. Captcha usage.
CAPTCHA (Completely Automated Public Turing test to tell Computers and
Humans Apart) is a type of challenge-response test, which consists of reading and reproducing a distorted text and is used to determine whether or not the user is human. This user
identification procedure has a shortcoming for those whose daily work is slowed
down by the distorted words, which are difficult to read. A person on average takes
about 10 seconds to solve a typical CAPTCHA.
Tokens are used to validate the user's identity (as in the case of a client attempting to access a bank account). A token is similar to an electronic key used to access a system. It is used in addition to, or instead of, a password to prove that the person is who he claims to be. Some devices can also store other information, such as a digital signature or biometric data (for example, fingerprints). Common token types include:
1. Digital Certificate;
2. Smart card;
3. USB Device;
4. One Time Password.
Two-factor authentication by implementing HOTP or TOTP:
1. HOTP (HMAC-based One-Time Password algorithm), described in the RFC 4226 standard, is based on two fundamental elements: a shared secret and a moving factor (a counter). The algorithm is event-based, which means that every time a password is generated the moving factor is incremented, so subsequently generated passwords are different every time;
2. TOTP (Time-based One-Time Password Algorithm, RFC 6238) is an algorithm that calculates a unique password from a shared secret key and the current time, using a cryptographic hash function to generate the one-time password (a minimal sketch of both algorithms is given below).
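As an illustration of the two algorithms, a minimal sketch using only the Python standard library could look as follows (the shared secret below is an illustrative test value, not a recommendation):

import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte select a 4-byte window
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # TOTP (RFC 6238): the moving factor is the number of time steps since the Unix epoch
    return hotp(secret, int(time.time()) // step, digits)

# Both sides share the same secret and therefore derive the same one-time code
shared_secret = b"12345678901234567890"
print(hotp(shared_secret, counter=0))
print(totp(shared_secret))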
• Multi-factor Authentication
Multi-factor authentication (MFA) is a method of access control where a user is granted access only after providing several separate items from the following categories in an authentication process: something the user knows (knowledge factor), something the user has (possession factor) and something the user is (inherence factor). In the multi-factor authentication use case, several factors are used to enhance the security of transactions compared to two-factor authentication.


Figure 2. Multi-factor Authentication

• Three-factor authentication (3FA)


Three-factor authentication adds another security factor and makes the counterfeiting of an authentication considerably harder. Typically, a biometric feature measurement is added. Such a system checks, for example, whether the person intending to log in knows the password, possesses the identity card and whether the fingerprint matches the stored records.
• Four-factor authentication (4FA)
4FA authentication increases security through the use of four unique factors for authentication. It makes compromising an account extremely difficult, since an attacker would have to use a portable device to break the password while connected through a cloned USB token, and also match the rightful account owner's retina scan and fingerprint.
• Five-factor authentication (5FA)
A five-factor authentication system is based on the three frequently used factors (knowledge, possession and inherence), plus location and time. In such a system, a user must reproduce something he knows or remembers, prove that he possesses an item with authentication capabilities, provide a biometric sample and be in the correct location, all within an accepted and verified timeframe, in order to gain access to the system.

3. MOBILE WEB APPLICATIONS

Web applications for mobile devices are software applications developed to run in mobile device browsers, under restrictions concerning hardware and software resources.
Mobile devices give users the benefit of connecting to the Internet anywhere and at any time.


The specific characteristics of web applications for mobile devices are designed in accordance with mobile devices' limitations:
• the size of the display screen - a usual web application is developed for a classical computer, which has a larger monitor; for mobile devices, web applications must be adjusted to the display size of the device used;
• resolution is very important for displaying images and text; the text within the web application must also be readable on mobile devices;
• connecting to the Internet - when mobile devices are connected to the Internet via wireless networks, the user is likely to move out of network range and lose the connection; therefore, web applications must handle this situation and not require a permanent connection.
Following an empirical analysis, a list of recommendations for designing mobile web
applications was developed, presented in Table 1:
Table 1. Recommendations for mobile devices web applications
Hierarchy: the division of displayed information, so that it may be rearranged according to the device's screen size; highlighting the important content; the user must not press more than 2-3 times to get the desired information.
Links: assign shortcut keys for each link on a page; if a link used within the web application is not usable on a mobile device, the user should know about this situation; ability to automatically dial a phone number that is written in the web application content.
Navigation: minimizing the scrolling through the web pages of mobile applications; positioning the most used sections at the top of the page, to be readily accessible for mobile users; include navigation buttons at the top of each page; include navigation buttons at the bottom of each page.
Footnote information: include a link to the desktop version of the web application; include a link to the feedback page.
Page titles, navigation links, and URL: page title and links must not exceed 15 characters; use only alphanumeric characters.
Page content: highlighting the important content; the most important topics are positioned at the top of the web application's main page.
Page arrangement: do not use frames or tables; do not use absolute sizes.
Forms: use dropdown lists, radio buttons and checkboxes to minimize user interaction with the application; use default values, where possible.
Images and colors: use reduced-dimension pictures; do not use graphics or animation for spacing; an image should not exceed 80% of the device screen width.
Screen size: two sites with different sizes, for mobile and desktop devices, are recommended.


All mobile device browsers share the following characteristics:
• bookmarks, to save important user pages;
• a save-page option, used to save web application pages in the device memory so the user can access those pages even when the device cannot connect to the Internet;
• history, saving the last web pages browsed on that device;
• a full-screen mode, so that web page content can be displayed on a larger area of the mobile device;
• increasing or decreasing the content font size, for users who want larger text or want to view more content on a single page;
• a return button to the previous page of the web application;
• based on these web application characteristics and the user interaction mode, the interaction characteristics that will help create user profiles can be determined.

4. ANALYSIS OF USERS BEHAVIOR CHARACTERISTICS

In the case of web applications for mobile devices, we can take into account the specific characteristics of traditional web applications, but also other mobile-specific features such as:
• text typing speed; the speed is significantly different on mobile devices compared to typing on a computer keyboard and varies from one user to another;
• the area covered when typing; each user has a touch pattern on the mobile device, depending on the size of the user's fingers; this feature also depends on the user's physical appearance;
• the amount of time a key is pressed; as in the case of a computer keyboard, the time a specific button is pressed on the virtual keyboard should be measured;
• how the keyboard display is closed when no longer necessary; the user can achieve this through a touch action just outside the keyboard area or by using the device's button for leaving the current activity, in this case the virtual keyboard;
• the touch screen area used to scroll a page or a text; similar to the area where a user holds the cursor, the mobile device screen is divided into several sectors, and the sector used to scroll the page content within the app is saved (Figure 3 and Figure 4).

Figure 3. Dividing the screen into two sectors to scroll the page


Figure 4. Dividing the screen into three sectors to scroll the page

• the zooming required to read text; each user prefers a certain degree of text magnification, so that the text displayed in the application can be read comfortably;
• the editing mode; the user can use a single finger, two fingers from different hands, or multiple fingers to write text using the virtual keyboard; this characteristic applies only to users who use devices with a virtual keyboard and may not apply to devices with a physical keyboard;
• how the user holds the mobile device when reading (landscape or portrait);
• how the user holds the mobile device when writing (landscape or portrait).
These characteristics must be measured for all online application users and, based on these measurements, a profile is built for each user.
For each property in the set, a series of measurements is conducted after the user is authenticated and a working session is created. The results are then saved in a database, as presented in Table 2; a minimal storage sketch is given after the table.
Table 2. Measurements realized for a t number of sessions

Session   TS        CK        ZRT        RM        WM
S_1^u     TS_1^u    CK_1^u    ZRT_1^u    RM_1^u    WM_1^u
S_2^u     TS_2^u    CK_2^u    ZRT_2^u    RM_2^u    WM_2^u
…         …         …         …          …         …
S_i^u     TS_i^u    CK_i^u    ZRT_i^u    RM_i^u    WM_i^u
…         …         …         …          …         …
S_t^u     TS_t^u    CK_t^u    ZRT_t^u    RM_t^u    WM_t^u
Where:
S_i^u – session i for user u;
TS – text typing speed;
CK – how the keyboard display is closed;
ZRT – zooming required to read text;
RM – how the user holds the mobile device when reading;
WM – how the user holds the mobile device when writing.
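As referenced above, a minimal storage sketch for the measurements of Table 2 follows; the table name, field names and example values are hypothetical and chosen only to mirror the abbreviations TS, CK, ZRT, RM and WM.

import sqlite3
from dataclasses import dataclass

@dataclass
class SessionMeasurement:
    user_id: str
    session: int
    ts: float   # text typing speed (e.g., characters per second)
    ck: str     # how the keyboard display was closed (e.g., "touch_outside", "back_button")
    zrt: float  # zooming required to read text (magnification factor)
    rm: str     # device orientation when reading ("portrait" or "landscape")
    wm: str     # device orientation when writing ("portrait" or "landscape")

def save(conn: sqlite3.Connection, m: SessionMeasurement) -> None:
    # Create the table on first use and append one row per authenticated session
    conn.execute(
        "CREATE TABLE IF NOT EXISTS measurements "
        "(user_id TEXT, session INTEGER, ts REAL, ck TEXT, zrt REAL, rm TEXT, wm TEXT)"
    )
    conn.execute(
        "INSERT INTO measurements VALUES (?, ?, ?, ?, ?, ?, ?)",
        (m.user_id, m.session, m.ts, m.ck, m.zrt, m.rm, m.wm),
    )
    conn.commit()

# Usage: record session 1 for user "u"
conn = sqlite3.connect("behavior.db")
save(conn, SessionMeasurement("u", 1, ts=3.2, ck="touch_outside", zrt=1.25, rm="portrait", wm="landscape"))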


5. CONCLUSIONS

In this study, an analysis of the current state of the authentication domain within online applications was conducted.
Research was carried out on how users interact with web applications for mobile devices, and the characteristics of user interaction in online applications were determined. At the end of the study, an analysis of the user behavior characteristics was performed.
These characteristics are measurable and can optionally be included in a user behavior analysis module.
In future research, models concerning the user profile based on the identified characteristics will be developed. The models will vary depending on the metering model and the capabilities of web platforms developed using different programming languages.

ACKNOWLEDGMENT

This work has been carried out as part of the Nucleu project: PN 16 09 01 02 – Cercetări
privind autentificarea online în cadrul aplicaţiilor software bazată pe comportamentul
utilizatorilor (Research on online authentication in software applications based on users
behavior).

REFERENCES

[1] E. Shi, Y. Niu, M. Jakobsson, R. Chow, Implicit Authentication through Learning


User Behavior, Information Security. Springer Berlin Heidelberg, 2011. 99-113.
[2] J. Roth, On Continuous User Authentication via Typing Behavior, IEEE
Transactions On Image Processing, July 28, 2014.
[3] V. R. Yampolskiy, Action-based user authentication, Int. J. Electronic Security and
Digital Forensics, Vol. 1, No. 3, 2008.
[4] FTC seeks public comments on facial recognition, 2012,
https://crisisboom.com/2012/01/10/ftc-seeks-public-comments-on-facial-
recognition/
[5] Fingerprint sensors, facial recognition and biometric surveillance to propel
biometrics market, http://www.donseed.com/4278-2/
[6] IBTimes, 2015, UN: Biometric iris scanners transforming Syrian refugee
programme by preventing fraud, http://www.ibtimes.co.uk/un-biometric-iris-
scanners-transforming-syrian-refugee-programme-by-preventing-fraud-1527362
[7] 5 Things You Should Know About the FBI’s Massive New Biometric Database,
2012, https://crisisboom.com/2012/01/11/fbi-biometric-database/
[8] NIST Authentication Guideline. 2016, https://pages.nist.gov/800-63-3/sp800-63-
3.html#sec4


[9] Strong Authentication Best Practices, https://safenet.gemalto.com/multi-factor-


authentication/strong-authentication-best-practices/
[10] Biometric authentication: what method works best?,
http://www.technovelgy.com/ct/Technology-Article.asp?ArtNum=16
[11] Understanding Digital Certificates, https://technet.microsoft.com/en-
us/library/bb123848(v=exchg.65).aspx
[12] Retina scan, http://whatis.techtarget.com/definition/retina-scan


AN IMAGE COMPRESSION SCHEME BASED ON LAPLACIAN PYRAMIDS

Catalin Ispas1*
Costin-Anton Boiangiu 2

ABSTRACT

In this paper, we propose an image compression scheme based on the Laplacian image
pyramids. First, the image is split into four sub-images repeatedly, in order to find an
optimal split position based on their variance. The split point slides between a starting point and an ending point and it is stored at every step. After finding the optimal split point,
the four sections determined by it are used to build four image pyramids, one for each
sub-image. The data of each pyramid is stored in a custom file format and is compressed
using BZ2.

KEYWORDS: image compression, image resampling, summed area tables, Laplacian


pyramid, bzip2, Lanczos filter, generalized pyramid

1. INTRODUCTION

Image compression has become an important subject nowadays, with the increase of content found on websites and on the Internet, such as static images or videos, and with the advent of mobile devices and video streaming. The desire to build an eye-catching website also raises the problem of transmitting that content in a timely manner, since most of the time users do not like to wait for it to be delivered and expect the content to arrive within an imperceptible timeframe.
Laplacian pyramids as means of image compression were introduced by Peter J. Burt and
Edward H. Adelson, in the paper “The Laplacian Pyramid as a Compact Image Code” [4].
A Gaussian pyramid (figure 1) is built by repeatedly downsampling the original image,
then the Laplacian pyramid is constructed by calculating the difference between the image
on level L of the Gaussian pyramid and the upsampled version of the one at level L+1 [1].
Each error image resulting out of this difference is a level in the Laplacian pyramid.
Compression is achieved by quantizing the pixel values in the error images. The original
image can be recovered by upsampling and summing all the levels of the Laplacian
pyramid [4].

1*
corresponding author, Engineer, ”Politehnica” University of Bucharest, 060042 Bucharest, Romania,
ispas.catalin@ymail.com
2
Professor PhD Eng., , ”Politehnica” University of Bucharest, 060042 Bucharest, Romania,
costin.boiangiu@cs.pub.ro


Figure 1. Graphical representation of a Gaussian pyramid

For image compression, a similar scheme was proposed by Costin Anton Boiangiu et al.
in the paper “A Generalized Laplacian Pyramid Aimed at Image Compression” [6], where
a scanning pattern is used to traverse the input image during the processing phase, in
order to group similar pixels, which might help to obtain better compression, when a
residual encoding algorithm such as Run-Length Encoding is used. Then a cross point is
sought, to create four sections of the input image, which will be utilized to construct four
Laplacian pyramids.
The concept of using Laplacian pyramids for image compression is further expanded for
video data, by Adrian Enache and Costin Boiangiu, in the paper “A Pyramidal Scheme of
Residue Hypercubes for Adaptive Video Streaming” [5], where „hypercubes are built as
residues between successive downsampling and upsampling operations over chunks of
video data”.
This paper proposes an image compression scheme based on splitting the original image
into four sub-images, each encoded into its corresponding Laplacian pyramid. Splitting
the image serves the purpose of separating the “negative spaces”, in order to obtain better
compression.

2. THE PROPOSED METHOD

The first step of this proposed method is to search for an optimal point inside the input
image, in order to split it into 4 sub-images. The search process starts off in the upper left
corner of the image and ends in the lower right corner. The start point and end point
are defined as in (1) and (2), where w is the width and h is the height of the image:


(1)

(2)

to avoid special cases in which a sub-image is too small to make any additional operations
on it or its size is zero.
Once these points are obtained, the input image is traversed from left to right, top to
bottom. At each step, four sub-images are formed and the variance value for each is
calculated using the formula:
σ_j² = (1/n_j) · Σ_{p ∈ s_j} (I(p) − μ_j)²          (3)

μ_j = (1/n_j) · Σ_{p ∈ s_j} I(p)                    (4)

where μ_j and n_j are the mean value and number of pixels of sub-image "j", I(p) is the intensity of pixel p and s_j is the set of pixels of that sub-image. The product of
these four values is calculated and stored for later use. Due to performance reasons, a
summed area table (integral image) was used to compute the four variance values (it’s
also worth noting that the input image is treated as a 1D array).
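A minimal sketch of this computation follows (the function names are assumptions; unlike the original implementation, which treats the image as a 1D array, the sketch uses 2D indexing for readability):

import numpy as np

def integral_tables(img: np.ndarray):
    """Summed area tables for pixel values and squared pixel values,
    padded with a zero row/column so corner cases need no special handling."""
    img = img.astype(np.float64)
    s = np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    sq = np.pad((img ** 2).cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    return s, sq

def rect_sum(t: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    """Sum over the w x h rectangle with top-left corner (x, y):
    one addition and two subtractions of table values."""
    return t[y + h, x + w] - t[y, x + w] - t[y + h, x] + t[y, x]

def rect_variance(s: np.ndarray, sq: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    """Variance of a sub-image computed as E[I^2] - (E[I])^2, using only table lookups."""
    n = w * h
    mean = rect_sum(s, x, y, w, h) / n
    return rect_sum(sq, x, y, w, h) / n - mean ** 2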

Figure 2. Calculating the sum (represented as S) for a sub-image s1, by using the values in the
summed area table T, recovered from the indices A, B, C and D (where w and h are the width and
height of the input image, s1.w and s1.h are the width and height of the sub-image s1, s1.x and s1.y
are the (x, y) coordinates of the top left corner of sub-image s1 and T is the summed area table)

Two tables are pre-computed from the input image: an integral image for simple sums and
one for squared sum values. Instead of pixel intensity values, an integral image contains
values which are the “the sum of the intensities of all pixels contained in the rectangle
defined by the pixel of interest and the lower left corner of the texture image” [2]. For


example, to determine the sum from relation (4) for a particular sub-image, only a sum
and two differences of values from the summed area table are necessary (see Figure 2).
After the traversal has finished, the lowest variance product from the ones that were
stored is utilized to pick the optimal split point (see Figure 3).

Figure 3. Sub-images resulting from splitting the input image at the optimal split point

The next step, once the optimal split point has been found, is to generate four Laplacian pyramids, one for each sub-image. For one pyramid, the sub-image is first downsampled (and then upsampled) and the difference between the original image and the resampled one is computed (Figure 4). This procedure is repeated until the size of the sub-image has reached 1 pixel (or the compression scheme has become too inefficient, due to the fixed data overhead added on every stored level), at which point the process stops; a minimal sketch of this loop is given below.


Figure 4. Obtaining a residue image R from the images I and J (produced by downsampling and upsampling I); this represents one level of the Laplacian pyramid

The motivation behind this step is that, although more sample elements than in the input image are produced, most of the sample values "tend to be near zero, and therefore can be represented with a small number of bits" [3]. BZ2 [8] was chosen to compress the error images resulting from this step, because one of the compression techniques it uses is the Burrows-Wheeler transform [9], which further improves the compression of repeating sequences of values, which may exist in this case. No further processing is done on the residues; each row of an individual error image is stored in the order described by a raster scanning pattern.
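A compact sketch of this storage and compression step follows (the function names are assumptions; the shapes list stands in for the dimensions array kept in the file format described below):

import bz2
import numpy as np

def compress_residues(pyramid) -> bytes:
    """Store each error image row by row (raster order) as 16-bit signed integers
    and compress the concatenated stream with bzip2 (Burrows-Wheeler based)."""
    raw = b"".join(level.astype(np.int16).tobytes() for level in pyramid)
    return bz2.compress(raw)

def decompress_residues(data: bytes, shapes):
    """Inverse operation, given the stored residue shapes (height, width) pairs."""
    flat = np.frombuffer(bz2.decompress(data), dtype=np.int16)
    out, pos = [], 0
    for h, w in shapes:
        out.append(flat[pos:pos + h * w].reshape(h, w))
        pos += h * w
    return out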

3. CUSTOM FILE FORMATS

Two custom file formats were used: one for a single image pyramid and one for storing all four resulting image pyramids. The PIFF (Pyramid Image File Format) contains information such as: the number of pyramid levels, an array with the widths and heights of the residues found at each level, an array containing the pixel data of all the residues and that array's number of elements. Because the error images resulting from the pyramid construction phase can contain negative values, their pixel data is stored as short integers. The residue dimensions are kept in the following manner: odd index values contain the heights and even index values the widths of the residues (Figure 5).


Figure 5. Graphical representation of the array in which the residues dimensions are stored

The MPIFF (Multi-Pyramid Image File Format) contains the original width and height of
the input image and four pyramid structures like the one described earlier (Figure 6).

Figure 6. The structure of the Pyramid Image File Format (up) and the Multi-Pyramid Image File
Format (down)
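As a rough illustration of the PIFF layout described above, a minimal writer could look as follows; the byte order, integer widths and exact field order are assumptions for the sketch and may differ from the original format.

import struct
import numpy as np

def write_piff(pyramid, path: str) -> None:
    """Hypothetical PIFF writer: number of levels, the dimensions array
    (even indices = widths, odd indices = heights), the element count and
    the residue data stored as 16-bit signed integers."""
    dims = []
    for level in pyramid:
        h, w = level.shape
        dims.extend([w, h])
    data = np.concatenate([lvl.astype(np.int16).ravel() for lvl in pyramid])
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(pyramid)))        # number of pyramid levels
        f.write(struct.pack(f"<{len(dims)}I", *dims))   # widths/heights of each residue
        f.write(struct.pack("<I", data.size))           # number of pixel data elements
        f.write(data.tobytes())                         # residues as short integers

The MPIFF container would then simply prepend the original image width and height and concatenate four such pyramid structures.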

4. OBTAINED RESULTS

During the testing phase, four grayscale versions of the following images were used:
Lena, Baboon, Peppers and Jellybeans [7], all in uncompressed format (figure 7). For
decimating the images, during the image pyramid construction phase, a scaling factor of 4
was used and 3 types of filters were tested: Nearest Neighbour, Cubic and Lanczos. The
compression algorithm used is BZ2.


Table 1. Performance of the proposed image compression scheme

Image name   Resolution   Raw size   Filter              Compression (bytes)   Compression factor
Lena         512x512      257 kb     Nearest neighbour   207 955               1.2703132
                                     Cubic               183 140               1.4424374
                                     Lanczos             184 334               1.4330942
Baboon       512x512      257 kb     Nearest neighbour   252 785               1.0450303
                                     Cubic               238 876               1.1058792
                                     Lanczos             240 100               1.1002415
Peppers      512x512      257 kb     Nearest neighbour   210 266               1.2563514
                                     Cubic               189 912               1.3910021
                                     Lanczos             191 396               1.3802169
Jellybeans   256x256      65.7 kb    Nearest neighbour   37 477                1.7975825
                                     Cubic               33 148                2.0323398
                                     Lanczos             33 868                1.9891342


Figure 7. The four images used to test our compression scheme (left to right, top to bottom):
Baboon, Lena, Peppers, and Jellybeans (Source: The USC-SIPI Image Database)

5. CONCLUSION

In this research paper, an image compression scheme based on Laplacian pyramids was proposed. The best results were obtained on images that contain negative spaces, such as the image "Jellybeans". The algorithm could be further improved by introducing space-filling curves, which could help attain better compression by grouping similar pixels [6]. Also worth trying is finding a better way of encoding negative values, rather than storing them as short integers.

REFERENCES

[1] Jeff Perry, Image compression using Laplacian pyramid encoding, C/C++ Users
Journal, volume 15, issue 12, pp. 35-47, Dec. 1997.
[2] Franklin C. Crow, Summed area tables for texture mapping, Computer Graphics,
volume 18, number 3, pp. 207-212, July 1984.


[3] E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt , J. M. Ogden, Pyramid


methods in image processing, RCA Engineer, Volume 29-6, pp. 34-41 Nov/Dec
1984.
[4] Peter J. Burt, Edward H. Adelson, The Laplacian Pyramid as a Compact Image
Code, IEEE Transactions on Communications, volume 31, issue 4, pp. 532-540,
April 1983.
[5] Adrian Enache, Costin Boiangiu, A Pyramidal Scheme of Residue Hypercubes for
Adaptive Video Streaming, International Journal of Computers and
Communications, volume 8, pp. 128-133, 2014.
[6] Costin Anton Boiangiu, Marius Vlad Cotofana, Alexandru Naiman, Cristian
Lambru, A Generalized Laplacian Pyramid Aimed at Image Compression, Journal
of Information Systems & Operations Management, volume 10, number 2, pp.327-
335, December 2016.
[7] The USC-SIPI Image Database, University of Southern California, Available online, retrieved from: http://sipi.usc.edu/database/database.php?volume=misc, Accessed at: 30 May 2017.
[8] BZIP2 Homepage, Retrieved from: http://www.bzip.org/index.html, Accessed at: 30 May 2017.
[9] M. Burrows, D.J.Wheeler, A Block-sorting Lossless Data Compression Algorithm,
Digital Systems Research Center, Research Report 124, May 1994.


USING SOFTWARE PACKAGES TO ANALYZE THE VULNERABILITY OF CULTURAL HERITAGE BUILDINGS

Camelia Slave 1*
Mariana Coanca 2

ABSTRACT

The seismic protection of buildings and masterpieces which have ecclesiastical and
monumental architecture calls for further actions. The paper presents the construction
planning of trilobite churches and the evolution of the vulnerability concept regarding the
Romanian cultural heritage buildings, focusing on the role of geometry in preventing
seismic damage and how the program ROBOT Millennium can be used to analyze a
masonry structure. The examples provided in the paper reveal the fact that geometry is a
measure of induced intelligence and it plays an essential role in preventing rotation.

KEYWORDS: churches, hazard, seismic risk, software package, vulnerability.

1. INTRODUCTION
The fundamental concepts developed by UNDRO-1979 (United Nations Disaster Relief Coordinator) and applied in the seismic risk assessment systems developed by EAEE (European Association on Earthquake Engineering) are based on specific mathematical concepts and provide the necessary conditions for thorough analyses of the seismic hazard and seismic vulnerability associated with the seismic risk.
Unanimously accepted, the link between seismic risk (SR), seismic hazard (SH) and seismic vulnerability (SV) can be expressed by means of the formal relationship:
SR = SH × SV
From the interpretation of the above relationship it results that the seismic risk (SR) associated with both the locations and the exposed objectives is based on the combination of seismic hazard (SH) and seismic vulnerability (SV), expressed in terms of peak ground acceleration (PGA), peak ground velocity (PGV) and peak ground displacement (PGD), and of the maximum spectral values of absolute accelerations (SA), relative velocities (RS) and relative displacements (RD).
The Seismic Vulnerability (SV) means the destructive effects caused by a strong seismic action on exposed elements or built systems. Vulnerability can be expressed through specific source parameters (focal mechanism), the ground motion at the site (local
1
* corresponding author, Lecturer, Ph.D., University of Agronomic Sciences and Veterinary Medicine,
Bucharest, Romania, camellia_slave@yahoo.com
2
Lecturer, Ph.D., Romanian-American University, Bucharest, Romania, coanca.mariana@profesor.rau.ro


conditions), and the structural features of the exposed elements (conditions for the seismic protection of buildings). Therefore, we can say that the seismic vulnerability has a random nature, as the action of a particular earthquake can cause substantially different structural damage to similar buildings in the same location.
A recent study proposes a new and effective type of composite bonding as a temporary
seismic intervention for quickly protecting masonry structures against aftershocks, giving
time to authorities for making decisions on a proper permanent repair intervention of the
heritage structure. This innovative composite strengthening system is based on highly
deformable adhesives made of special polyurethane that can be applied very quickly and
is mechanically removable; therefore, it could play an important role during interventions
on historical structures (A. Kwiecień et al., 2016).

2. SEISMIC PHENOMENA

2.1. Seismic Vulnerability (SV)

Defining seismic vulnerability displays a variety of aspects:


- It is the damage that will be faced by a building or its exposure to the action of an
earthquake. Therefore, the seismic vulnerability can be defined as a relationship between
the intensity of the action and the level of seismic damage that is expected to occur;
- It is based on the susceptibility of exposed elements, or of a built system, to suffer damage or certain specific losses due to the incidence of an earthquake, expressed in probabilistic or statistical terms;
- It shows the degree of loss of a given element at risk, or a group of such elements, due
to the occurrence of a natural phenomenon of a given magnitude, expressed in relation to
a certain scale;
- In order to assess the seismic vulnerability, specific details are needed for statistical-
probabilistic analyses and/or approaches of structural engineering, economic analyses,
etc.;
- It is defined as the sum of damages, loss of life due to the degree of intensity of a place
or an area;
- Estimating the vulnerability of a structure means linking the seismic risk and intensity
of the expected earthquake to the level of structural damage if the earthquake occurs
(Slave, 2010).

2.2. Seismic Risk (SR)

The notion of seismic risk is complex. There is a practically generalized inconsistency


regarding the definition of the seismic risk concept in most specialized papers published
in Romania and abroad, as shown below:
- It represents the degree of expected loss, in a probabilistic sense, caused by a natural
phenomenon, depending on the natural hazard, degree and duration of exposure;


- It is the expected number of lost lives, injured persons, property destruction, disruption
of economic activities due to a natural phenomenon being, as a result, the consequence of
a specific risk;
- Risk is considered as an anticipation of losses and other negative events, assessed on
the basis of existing consequences;
- The seismic risk is the synthetic characterization of the expected sequence of
occurrence cases: (a) effects of different degrees of gravity (damage cases, various
losses), (b) degradation of a system of exposed elements that are object of analysis and (c)
a current situation regarding the seismic vulnerability of various elements and their
exposure;
- It is the likelihood that the social or economic effects, expressed in money or casualties, will exceed the expected values at a given place and time;
- It represents the possibility of incidence of a negative social impact;
- It is defined as the probability of occurrence of expected adverse effects due to
earthquakes over the lifetime of a construction. The expected adverse effects include loss
of life, economic and social disturbance, and damage to the physical state of construction;
- It consists in the probability of producing a disastrous event, of a certain magnitude, at
a given place and within a given time;
- The seismic risk is proportional to both the frequency of occurrence of the disastrous
phenomenon considered and the extent of its consequences for the population, the
environment and the technological infrastructure;
- The seismic risk that is accepted by society is regulated at government level, by
establishing the hazard areas, enacting the rules and regulations of construction, and by
imposing measures regarding spatial planning.

2.3. Seismic Hazard (SH)

Several definitions of the seismic hazard from the existing literature in the field of
Engineering Seismology and Geophysics are presented below:
- It characterizes the likelihood of an earthquake with destructive potential in the site
chosen for a building, throughout its lifetime. In this regard, the seismic hazard can be
defined as a relation between the degree of seismic damage and the probability of
manifestation of an intensive seismic movement;
- It represents the expectation of a series of seismic events which, according to the
methodological needs, can be considered in relation to the sources or the locations which
correspond to a Poisson process, starting from the premise of the probabilistic
independence of different earthquakes;
- It is the probability of occurrence of a potentially destructive natural phenomenon
within a specified time frame in a specified area;
- It provides a synthetic characterization of the expected sequence of seismic events
with different levels of severity by using probabilistic concepts;


- It is defined as the probability of an earthquake of a certain magnitude at a specific


place and time;
- It represents the natural physical degree of exposure of a particular geographical area;
- It is the likelihood that a certain level of maximum acceleration will be exceeded within a certain timeframe;
- In order to characterize the seismic hazard, it is necessary to specify for each case
study whether there are considered several types of events, surface earthquakes or
medium depth earthquakes;

2.4. The construction planning of trilobite churches

The oldest monumental buildings preserved in the Carpathian-Danubian-Pontic region are


churches. For centuries, they have been the most representative masterpieces which boast
ecclesiastical and monumental architecture. These have always been Orthodox churches,
because Romanians are the only Latin nation of Orthodox religion, while all other peoples
of Latin origin are Catholic. Being erected in stone and brick, these Eastern Balkan-
Byzantine-style Churches have always been the proof of the level of technical knowledge,
cultural receptiveness and artistic refinement achieved during their time (Sofronie, Popa,
1999).
As a general rule, the Byzantine model was based on the standard Greek system, with the Greek-cross type inscribed in a rectangle and the dome supported on pendentives or pillars. However, they were creatively adapted to the regional traditions of secular architecture. Moreover, these Orthodox churches still reflect the foreign influences on the native art of building. Originally made of wood, trilobite churches began to be built of stone and brick in the fourteenth century. At the beginning, these churches were provided with a single bell tower, called Pantokrator. Later, two, three or four towers were added to decorate the ecclesiastical monuments. Sometimes one of the rear turrets is used as a bell tower and/or as an oriel.
The two geometrical characteristics of the turrets are the outer diameter D and the height
H from the base to the top of the masonry dome, while their slenderness is defined by the
D / H aspect ratio, which is usually between 1/2 and 1/3.25. The size of the trilobite
churches is quite small, like that of a two or three-storey building (Sofronie, Popa, 1999).
Under seismic phenomena, some of these churches were dramatically damaged or even
destroyed. The trilobite shape is far from being the most suitable for churches to face the
earthquakes. However, they have been faithfully preserved over the centuries.

3. CASE STUDY –THE ARGES MONASTERY

Harun emphasizes the fact that the concept of heritage is invariably linked to UNESCO's Convention Concerning the Protection of the World Cultural and Natural Heritage (1972), which defined cultural heritage by the following classifications (Harun, 2011: 42-43):
- Monuments: architectural works, works of monumental sculpture and painting, elements or structures of an archeological nature, inscriptions, cave dwellings and combinations of features, which are of outstanding universal value from the point of view of history, art or science;


- Groups of buildings: groups of separate or connected buildings which, because of their


architecture, their homogeneity or their place in the landscape, are of outstanding
universal value from the point of view of history, art or science.
- Sites: works of man or the combined works of nature and of man, and areas including
archeological sites which are of outstanding universal value from the historical, aesthetic,
ethnological or anthropological points of view;
One of the most representative trilobite churches in Romania is The Argeş Monastery. It
was built in six years, between 1512 and 1518, under the reign of Neagoe Basarab, as a
church mausoleum, where the prince and his family were buried. Later, a monastery was
built around the church. Due to its special beauty and fame, many other churches were
modeled in a similar way. Today it is an Episcopal Church and a heritage monument.

3.1. ROBOT Millennium Software Package

For the determination of gravity centers (GC) and rotation centers (RC), the structure was
analyzed using ROBOT Millennium. It is a single integrated program used for modeling,
analyzing and designing various types of structures. The program allows users to create
structures, to carry out structural analysis, to verify obtained results, to perform code
check calculations of structural members and to prepare documentation for a calculated
and designed structure.
The most important features are: number units and formats (dimensions, forces, possibility of unit editing); materials (selection of the material set according to the country, and the possibility of creating user-defined materials); section databases (selection of the appropriate database with member sections); structure analysis parameters (selection of the static analysis method and definition of basic parameters for dynamic and non-linear analysis; selection of analysis types; possibility of saving results for seismic analysis – combination of seismic cases).
The menu consists of two parts: a text menu and toolbars with appropriate icons. They
can be used interchangeably, according to the users’ needs and preferences. Both are
displayed in the same way - as a horizontal bar at the top of the screen (additionally, for
some layouts in the ROBOT Millennium system, another toolbar with most frequently
used icons is displayed on the right side of the screen). Basic options available within the
modules are accessible both from the text menu and the toolbar. (Figure 1)
The second method of work with ROBOT Millennium is by using the special layout
system. ROBOT Millennium has been equipped with a layout mechanism that simplifies
the design process. The layouts in ROBOT Millennium are specially designed systems of
dialog boxes, viewers and tables that are used to perform specific defined operations.
Layouts available in ROBOT Millennium were created to make consecutive operations
leading to defining, calculating, and designing the structure easier.


Figure 1. The Robot Millennium interface


(Source: https://i.ytimg.com/vi/m1cf4hv_EMQ/maxresdefault.jpg)

3.2. The Sacrifice Myth and Further Analysis

According to the famous legend of Negru Voda, a Romanian prince ordered a masonry
team to build a church on the upper course of the Arges River at the foot of the
Carpathians. The work was carefully organized and well-structured until the walls began
to rise. Then, surprisingly, everything that was built daily, collapsed overnight. Only after
the sacrifice of the head mason’s wife was the church built entirely. The beauty of the
church charmed the Prince. To prevent the masons from building more beautiful churches, he decided to sacrifice the head mason and his team (Eliade, 1943).
The legend is meant to convey certain professional standards, but only to clever people
who could understand their true meaning. Indeed, if the ideas presented above are
carefully analyzed, the ancient concept of durability can be expressed as follows:
a. The mysterious forces that cause the destruction of the church walls overnight can
only be those caused by earthquakes. Therefore, in seismic areas the construction
site should be carefully chosen. That depends mainly on the seismic hazard of the
site and the seismic risk as well. The most important source of information comes
from the location history. It is a long tradition to build churches and monuments
on the hills or local heights where the seismic intensity is somewhat smaller. This
rule was applied in Bucharest, when a location with a lower seismic intensity was
chosen for The Palace of Parliament (Sofronie, Popa, 1999).
b. The idea of centering the church from the very beginning is, in principle, related to
the need for balance. Each church has two intrinsic centers: the former is the center of mass or gravity, and the latter the center of rotation or rigidity. Both centers, GC and RC, lie on the longitudinal axis of symmetry. Inertial forces induced by earthquakes


are applied at the GC and, with respect to the RC, produce torsion of the structure as a whole. Going up, the seismic action reaches the turrets, whose own inertial forces produce shearing. The seismic vulnerability of churches depends on the relative position of the two centers. According to Eurocode 8, ENV 1998-1-2, Part 1-2, for seismic protection the distance between the two centers, which is, in fact, the eccentricity of the seismic forces, should be reduced to less than 10% of the length of the church. In the case of The Argeş Monastery, the following was found (Sofronie, 2004):
In the areas of fullness:
X_M = 9.184 m
Y_M = 5.500 m
X_R = 8.736 m
Y_R = 5.500 m
The difference between X_M and X_R is 9.184 - 8.736 = 0.448 m.
The length of the church is 20.4 m (between the axes of the walls), so the eccentricity between X_M and X_R is 2.2% < 5% < 10%.
In the areas with voids:
X_M = 9.000 m
Y_M = 5.500 m
X_R = 8.250 m
Y_R = 5.500 m
The difference between X_M and X_R is 9.00 - 8.25 = 0.75 m.
The length of the church is 20.4 m (between the axes of the walls), so the eccentricity between X_M and X_R is 3.7% < 5% < 10%.
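Written compactly, the eccentricity check applied above is (only restating the figures already given):

\[
e = \frac{|X_M - X_R|}{L} \times 100\%, \qquad
e_{\mathrm{full}} = \frac{0.448}{20.4} \times 100\% \approx 2.2\%, \qquad
e_{\mathrm{voids}} = \frac{0.75}{20.4} \times 100\% \approx 3.7\%
\]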

Figure 2. Positions of centers of gravity and rotation in areas with fullness
Figure 3. Positions of centers of gravity and rotation in areas with voids

The shape emphasizes the geometry of the church over the building materials. Geometry not only creates balance, but also plays an essential role in preventing rotation. It is also a measure of induced intelligence. In the case of the Arges Monastery, the sacrifice myth had geometric consequences. Indeed, in order to create space for the sacrificed person's body, the head mason was forced to change the original

form of the church by extending the narthex. For reasons of symmetry, both parts of the
church were changed identically. In this way, the relative position of the two centers
changed. The general torsion phenomenon has been modified accordingly.
Oriental churches which comply with the Trilobite Plan were sanctified during the
sixteenth and seventeenth centuries when religion played a decisive role in all European
societies. This typical form was developed for structural reasons, as well as by the use of
plastic behavior of masonry as a building material. The shape allowed physical
connections between curved and straight surfaces, from horizontal to vertical planes.
Trilobite churches are spatially balanced buildings as they are able to safely handle
gravitational and side effects, in accordance with the durability requirements. Balance is
the main condition of aesthetics and beauty (Sofronie, 2004).
The original planning is shown in Figure 4. The two intrinsic centers, GC and RC, lie behind the central axis, and the distance between them is below 1.27% of the total length of the church. It is even smaller than the accidental eccentricity which, according to the same provision of Eurocode 8, is 5% of the total length. The two centers almost overlap (the difference between them is 45 or 75 cm, depending on the section - full or void). However, the result is far from being as favorable as it appears.

Figure 4. The original trilobite planning
Figure 5. The extended trilobite planning

Indeed, for the same moment of torsion, the shear forces are inversely proportional to the
distances from the RC to the extreme points. Since the distance from RC to B is smaller
than RC to A, curved walls would be more exposed to seismic actions. That explains why
so many trilobite churches have cracks on their apses and measures for their restoration
must be taken.
By expanding the narthex, as in the case of The Argeş Monastery, things change essentially: the RC passes on the other side of the GC on the symmetry axis. What happens in this case is that the eccentricity increases to 2.2-3.7% of the total church length, but remains less than the 5-10% recommended by EC8 and P100-1/06. At the same time, the distances from the RC to the apses of the walls increase substantially, while the shear forces decrease. In the particular case of The Argeş Monastery, the extent of the shear forces is reduced by about 32%. That explains why the apses of the trilobite designed churches were, without exception, so well preserved. The same cannot be said about their turrets. Most of them are cracked on the circumference, always in the same place, namely at the beginning of the arches.


4. CONCLUSIONS

The seismic hazard (SH) can be defined as a potential threat, mainly due to the seismic
action (generated by the natural tectonic or volcanic activity), to the phenomena of
instability of the geological masses (landslides, collapses in karst areas or mining) or
surface geomorphologic phenomena (liquefaction of sands, huge water accumulations by
large dams). The seismic vulnerability (SV) refers to the destructive effects of strong
seismic actions on exposed elements or built systems. Vulnerability can be expressed
through source-specific parameters (focal mechanism), site movement (local conditions),
and structural feature of exposed elements (conditions for antiseismic protection of
structures). Therefore, it can be said that the seismic vulnerability is of a random nature,
because the action of a certain earthquake can cause substantially different structural
degradations on identical constructions situated in the same location.
The seismic hazard (SH) refers to the potential causes that can lead to disasters, while the seismic risk (SR) is specific to the negative effects of the earthquake. The seismic hazard (SH) is independent of man's interventions and therefore cannot be modified or diminished, while the seismic risk (SR), which is the consequence of the hazard, can be substantially reduced by competent interventions on the seismic vulnerability of the exposed elements or built systems. A high seismic vulnerability indicates a low level of resistance to seismic actions or low antiseismic protection. Reduced or limited seismic vulnerability can contribute substantially to reducing the seismic risk. The devastating effects of earthquakes can be
reduced by correctly estimating the seismic hazard and the seismic vulnerability of the built
environment through statistical and probabilistic analyses. On this basis, seismic risk
scenarios, as well as defensive measures designed to reduce human and material losses, are
being developed to cope with major and exceptional events. The notion of seismic risk is a
probabilistic concept that includes material damage and loss of life.
By coincidence, in the early years of building the Basilica of St. Peter in Rome, The
Arges Monastery in Wallachia was built in the form of a trilobite Greek cross. Prior to
consecration, both had technical problems: St. Peter's Church with gravity, while the church of The Argeş Monastery with those mysterious forces occurring only during the night. In the former case, the problem was solved scientifically. By applying the most
advanced theories of that era, after more than two centuries the cracks of the dome were
repaired. The Argeş Monastery has resisted for almost five centuries. It is the proof of
what today is called seismic protection.
At the end of the commentary on Manole’s legend, Mircea Eliade mentions that in
antiquity it was a tradition that some professional rules should be kept secret. That is why
he admits that the myth of the sacrifice has an esoteric character. The legend is intended
to convey certain professional gold standards, but only to people who could understand
their true meaning.
The antiseismic design of masonry and monument buildings is a top priority because
human lives must be protected and the cultural heritage buildings must be carefully
preserved. Site data collection, test results, and numerical models are used to develop
accurate and realistic methods for assessing the structural performance during
earthquakes. The above example highlights the role of geometry in preventing seismic


damage. The message attributed to the myth of the sacrifice seems to consist in the idea
that trilobite churches are paraseismic masterpieces.

REFERENCES

[1] A. Kwiecień, M. Gams, A. Viskovic & B. Zając. (2016). Temporary and


removable quick seismic protection of weak masonry structures using highly
deformable adhesives. In, Van Balen, K., Verstrynge, E. (Eds). (2016). Structural
Analysis of Historical Constructions: Anamnesis, Diagnosis, Therapy, Controls.
London: CRC Press, pp. 1528-1535.
[2] Cod de proiectare seismică – partea A III-A – Prevederi pentru evaluarea seismică a
clădirilor existente indicativ P 100-3/2008 (Seismic Design Code, Part III-A-
Provisions for seismic evaluation of existing buildings indicative P 100-3/2008)
[3] Eliade, M. ( 1943). Comentarii la legenda Meşterului Manole, Publicom, Bucureşti.
[4] ENV 1998 Eurocode 8: Design of structures for earthquake resistence.
[5] Harun, S.N. (2011). Heritage Building Conservation in Malaysia: Experience and
Challenges. In Procedia Engineering 20, pp. 41-53.
[6] Sofronie, R., Popa, C. and Nappi, A. (1999). Geometrical approach of restoring the
monuments, Proceedings of the International Workshop on Seismic Performance of
Built Heritage in Small Historic Centres, Assisi, Italy, pp. 379-387.
[7] Sofronie, R., Popa, G. and Nappi A. (1999). Long term behaviour of three-lobed
churches, Proceedings of the IASS 40th Anniversary Congress, Madrid, Spain.
Vol.II, pp. 123-130.
[8] Sofronie, R. (2004) Vulnerability of Romanian Cultural Heritage to Hazards and
Prevention Measures. Proceedings of the A.R.C.C.H.I.P. Workshops, Prague,
pp.525-540.
[9] Slave Camelia September (2010). Seismic risk assessment of existing structures,
PhD Thesis, UTCB. pp 19-23.
Online source: https://i.ytimg.com/vi/m1cf4hv_EMQ/maxresdefault.jpg


INTERDEPENDENCE BETWEEN E-GOVERNANCE AND KNOWLEDGE-BASED ECONOMY SPECIFIC FACTORS

Mihai Alexandru Botezatu 1*


Claudiu Pirnau 2
Radu Mircea Carp Ciocardia 3

ABSTRACT

The fast expansion of the Internet has prompted the introduction of e-Governance at all levels. The main issue with the services introduced through e-Governance (as well as m-Governance) is the lack of a technically safe infrastructure. The connection between services and security involves the development and introduction of internal governance reforms designed to offer a more citizen-oriented approach through better integration and coordination between the organizations/institutions involved. As online interactions increase compared with offline ones, the pressure for greater openness and accountability has intensified; the result is the wider implementation of e-democracy. This paper is structured in six chapters: introduction; e-Governance and cybersecurity; e-Governance and e-reputation; e-Governance development with Microsoft Power BI; the use of Power BI in the analysis of e-Governance implementation at regional level; case study and conclusions.

KEYWORDS: e-Governance, Cybersecurity, Power BI, Knowledge Society, Big Data

1. INTRODUCTION

E-Governance is “the process of reinvigorating the public sector through digitization and
new information management techniques, a process whose ultimate goal is to increase the
degree of political participation of citizens and the efficiency of the administrative
apparatus.” The approach for implementing e-Governance is always top-down, from state
to citizen. (Figure 1).
Digital development offers a new perspective on the future directions of the Knowledge
Society development. This can only happen in the case of interaction between e-
Governance and the major components of society - state organizations, private
enterprises, academia and civil society organizations [1].
Knowledge society and e-Governance are closely connected, as the latter is one of the
major pillars of the knowledge-based society.

1*
corresponding author, Lecturer PhD, Romanian-American University,
botezatu.mihai.alexandru@profesor.rau.ro
2
PhD Engineer, Politehnica University of Bucharest, claude.pyr@gmail.com
3
Associate Professor, Politehnica University of Bucharest, radumirceacarp@gmail.com


Figure 1. The sense of e-Governance

The Digital Single Market is one of the top 10 European Commission priorities. The
Digital Single Market Strategy was adopted on 6 May 2015, setting out 16 initiatives to
help consumers, small businesses and industry to fully benefit from the digital single
market, including the digital healthcare development and the uniform implementation of
telemedicine service in Europe.
Knowledge transfer and assessment processes are responsible for the typical association
and relationship between e-Governance, knowledge society and citizens of smart cities.
To this end, it is obvious that government support is provided by private enterprises for
endowment with hi-tech technologies. The growth intervals of e-Governance in the
knowledge-based economy are assessed through specific elements such as: the channels
needed to transmit the knowledge flows, the factors that influence the development of e-
Governance, the specific knowledge regarding processes that require e-Governance, the
regional development of knowledge-based society using “its own capital”, the benefits
and impact of e-Governance for the knowledge based economy [2].
This type of evaluation allows the differentiation of e-Governance and m-Governance development between rural and urban areas, as well as between countries with different development levels. The "adoption of e-Governance" variable is composed of the following indicators:
• Percentage of individuals using the Internet in relation to public authorities in
order to get information;
• Percentage of persons who download forms;
• Percentage of persons who use electronic services to return the completed forms
to the competent public authorities [3].
In this context, the maturity levels of e-Governance are:
• Level 1. Web presence - citizens need to find all the necessary information on the
site. Thus, the government can initiate effective actions through the virtual
environment;


• Level 2. Interaction - citizens must be able to contact their own government


through its site, for example by using the e-mail service. Public interest items
must be available for download by taxpayers;
• Level 3. Transaction – i.e. online payment facilities;
• Level 4. Transformation (at local, regional and national level) [4].
At this level, e-Governance has to transform the existing processes into integrated, efficient,
unified and personalized services. This requires the development of internal and external
communication processes with the business environment and non-governmental organizations.
To prevent cybercrime (more profitable than drug trafficking) and to ensure the security of
information systems, procedures and policies are required, and they have to be observed by all
categories of computer system users. It is important to note that security policies create a
cycle consisting of the following steps: implementation, testing, monitoring and evaluation
(Figure 2) [5,6].

Figure 2. Cycle of Security Policies (Source: Vasiu, I. & Vasiu, L. 2011)

The main security policies are:


• Treating information as an asset;
• Controlled access to information;
• Controlled access to IT networks;
• Authorization by login;
• Individual responsibility;
• Functional responsibilities (implementation of security controls and procedures);
• Protection of intellectual property;
• Secure loans;
• Access to external systems;
• Contingency plans;
• Prohibition of unauthorized IT programs;
• Management of exceptional situations.
Factors contributing to security efficiency are (Figure 3) [5,7]:
• The size of the organization;
• Dissuasive and preventive efforts (the administration can play an important role in
identifying and targeting organized crime groups, which contribute to money
laundering and conceal the financing of crime);
• Collaborative management;
• Category of the organization's activity;
• Risk management.
The main risks identified in the various activities of the e-Governance process (or related
to it) are presented in Figure 4 [8].

Figure 3. Model of security activity of a computer system (Source: Vasiu, I. & Vasiu, L. 2011)

A Bank of England study concerning systemic risk, published in mid-2014, showed that
57% of respondents from multinational enterprises assessed geopolitical risk as the main
challenge. Significant changes took place during that period in terms of political and
military strategies. According to the theory of possibility, every regional power tests what
we can call “the freedom to extend the limits of maneuver” [9].

Figure 4. Specific Risks

Continued geopolitical uncertainty could lead to an increase in the complexity of the
transition phenomenon from simultaneous crisis management to the implementation of
sustainable knowledge-based strategies.
In the context of the increasing number of military conflicts, an important role is played by
e-Governance, which has to manage not only the issue of the different types of strategies but
also the new issue related to the expansion of the refugee resettlement and integration
phenomenon (depending on their origin, number, gender, qualification, religion, etc.).
Implementation of specific programs can only be achieved through new technologies and IT
systems, which in turn require solving an extremely complex problem: cyber security [10].

2. E-GOVERNANCE AND CYBERSECURITY

The increasing complexity of IT systems has transformed the world and has become a major
challenge for e-Governance. The enforcement of specific legislation has evolved,
generating ways to protect against internal threats, while military structures have evolved
primarily to provide support against external threats (acknowledging that the extent to which
the army is involved in domestic affairs varies from one state to another). Generally,
cyber threats come from outside the borders, making it difficult to enforce the law to
diminish or punish them. However, such threats rarely amount to the level that would
justify a military response. The way governments respond to these challenges can have
implications both at national and international level, depending on the nature of the threat.
In this context, the first challenge is to “understand the nature of the threat.” This includes
recognition of the fact that there is a major difference in perspective within the
international community between those states that prefer to talk about “information
security”, which includes protecting citizens from what they call “harmful content”, and
those states that focus on “cyber-security” as a subset of information security policies [11]. The
main steps in ensuring cyber security are shown in Figure 5. Acknowledgement of the fact
that not all cyber attacks are motivated in a similar way is also essential to see how a
government could address these threats. Specialists in this field use different names to
describe the range of cyber attacks, such as “threat”, “spying”, “subversion”, “sabotage”,
“cybercrime” and, in very few circumstances, “cyber-war”. The vast majority of these
crimes lie below the threshold of “act of war”. Differences between these categories may
be minimal, but they prove to be important, both legally and politically. In other words, in
the case of a cyber attack, a military response, or even a legal one, is often not the best
answer. This does not mean that cyber-threats below the “war” level should not be taken
seriously [12]. There are also high-risk situations and critical moments that require the
intervention of the army. Military personnel specialized in this area have the cyber capability
to support combat in theatres of operations as well as to safeguard their own systems during
peacetime. They provide information and warning signals at national and international level,
which usually underpin the most sophisticated cyber operations. The military is usually
mission-oriented; this type of action is better resourced and documented than other
governmental activities, as it is structured in a way that creates and develops the needed
staff – exactly what would be desirable in order to build an effective cyber defense unit.
Moreover, overloading the army generates challenges for at least two
reasons. The first is the practical risk of creating a crowding-out effect. Given the
“proliferation” of IT systems (IoT, cloud, etc.), cyber security will have to be a discipline
that everybody in a country takes seriously, not just something that citizens and private
companies can expect to outsource to the military.
Any country that depends too much on the army to ensure cyber security will probably
find itself in the undesirable situation of reducing the incentives needed to develop
long-term solutions in the private sector.

Figure 5. Seven Steps to Cybersecurity for Control Systems (Source: Shrader Engineering Inc. 2015)

Second, but no less worrying, is the risk of domestic security militarization, which in
many countries would be considered a very bad thing. To achieve true cyber-efficiency, it
is necessary to operate permanently on the defended systems. Few private sector
companies are in a position to receive military assistance in cyber security. The central
issue of the role of the army in “defending the nation” against cyber threats is related to
the role and capacity of each government. Naturally and necessarily, there must be other
cyber-security institutions made up of various competent bodies: the police, national
associations to ensure information security, technical centers for computer operations,
technical centers to respond to cyber security incidents, etc. These institutions regularly
participate in NATO or EU cyber exercises. To this end, Romania is an integral part of this
civilian approach to cyber security, based mainly on Romania's Cyber Security Strategy.
Innovative agreements, such as the Council of Europe Convention on Cybercrime of 2001
(with some 50 signatory parties on all continents), have simplified international legislation
on cybercrime. For example, Microsoft and the US Federal Bureau of Investigation are
working with international partners to detect and dismantle cyber criminal networks.
Another potential approach for any government concerns the role of the private sector in
ensuring its own security. This can be implemented by
creating an appropriate incentive structure for sharing information between companies
and raising the standards for cyber security (including through governmental regulations).
This could involve the following categories of activities: the exchange of intelligence
information between private sector companies (users and Internet service providers) in
order to improve defense systems and procedures against cyber attacks; licensing the
private sector to respond to intrusions, so-called “hacking back”; creating additional
emergency training teams to coordinate responses at the private sector’s request. At present,
at national level, legislation does not allow hacking back, as no country wants to take on
the risk of escalating conflicts for this reason.
In practice, each country faces different approaches and points of view, the results of
which can take various forms depending on the level and type of cyber attack:
• Defense information theft is probably classified as the most serious threat to
national security and, as a result, it would fully justify governmental involvement
in solving the problem. The motivation of such an intrusion may be commercial or
strictly military (depending on whether the intruder is a potential opponent willing
to sell the stolen information or not).
• Counteracting a potential attack on the critical infrastructure for a nation’s
existence (energy, communications, transport, finance, etc.) is another serious
concern, although it is classified as a less immediate threat than the theft of
information concerning national security. Although the army has competencies in
this area, in most countries a part of the critical infrastructure is under the
administration of the private sector, making military approaches less practical
or acceptable. The best government approach in this area could be to use
economic incentives, including regulations to improve security levels;
• Actions to protect intellectual property, counteracting commercial or industrial
espionage represent another area where military approaches may not be
appropriate. However, given the potential economic impact, especially where
advanced state-of-the-art threat techniques are used, this type of activity has the
potential to seriously destabilize international relations. In these situations, most
of the time, sanctions against certain countries are imposed;
• Threats concerning cybercrime, although not a direct threat in themselves, could
turn into one (in the absence of effective control) because of the potential of
terrorists or certain states to mobilize criminal networks. In general, the
application of specific legislation is very important for this purpose.

3. E-GOVERNANCE AND E-REPUTATION

E-reputation is a fairly recent phenomenon, based on the influence of three elements:
Internet services, e-Governance and cyber security. This indicates the trust and perception
Internet users have of online services offered by government, public administration, or
various organizational categories. This reputation (positive or negative) comes not only
from information produced by a particular entity, but also from stakeholders and
customers, who can easily express satisfaction or dissatisfaction with the quality of online
services and safety in their exploitation.

There is a quote from a famous American businessman, Warren Buffett: “It takes 20 years
to build a reputation and five minutes to destroy it. If you think about it, you will act
differently.”
The repercussions of the e-reputation phenomenon (also known as digital reputation,
cyber reputation or web reputation) are not limited to the individual (employee or
client) level, but extend to the image of entire organizations, regions or even countries. All
organizations, including governmental organizations, must include in their future
strategies the implementation of e-reputation monitoring systems which are as necessary
as cybercrime prevention systems.
Also, building a professional reputation must be based on a positive digital reputation.
When a person holds a leading position or is in a sensitive position (as is the case with
politicians), one needs to make a clear distinction between the public and private areas on
the web (blogs, social networks etc.). In this sense, every individual interested in one’s
image must build a digital strategy for professional development.
Official statistics indicate that the number of employers conducting Internet surveys on the
reputation of future employees doubles every two years.
According to a poll conducted by the FIPO (French Institute of Public Opinion) - present
in Europe and Asia - regarding VIP reputation on the internet, 85% of consumers make
purchases and 80% inquire before buying based on digital reputation. According to the
survey, 66% of consumers got recommendations before buying. In 30% of cases, an
unfavorable online review led to abandoning the purchase process. Thus, 96% of Internet
users are influenced by a brand’s e-reputation during a purchase.
Government institutions, following the pattern of a large number of companies, should
hire a community manager whose primary duty would be to maintain the digital
reputation of an organization or brand.
Such a manager needs appropriate tools to help him/her in problem identification
processes, namely finding, filtering, and implementing creative solutions. Microsoft
Power BI is a self-service analysis solution (to optimize decision-making), now
considered a “democratization” of ERP and CRM solutions.

4. DEVELOPING E-GOVERNANCE WITH MICROSOFT POWER BI

One of the important goals of Power BI projects was also to visualize and monitor the
models on the front end. For example, Power BI serves this function by displaying data
sets drawn directly from cloud sources, Azure HDInsight, and SQL Database on several
large screens in Arvato’s monitoring center (Arvato Bertelsmann SE & Co – improving
fraud recognition with Microsoft Azure) [13].
Similar to SaaS packages, Power BI allows organizing and sharing dashboards,
reports, and data sets. Power BI reports can be published within organizational packages
specific to each team. These are easy to find, being managed in one location, the content
gallery. Because they are part of Power BI, they allow the use of all categories of tools,
including interactive data exploration, visualizations, Q & A, integration with other data
sources, data refresh, and more [14].

A unique feature of Power BI is the ability to connect directly to on-premise data sources,
including SQL Server Analysis Services (SSAS), SQL Server, and so on. Figure 6 shows
an example of a live SSAS connection.
The Analysis Services Connector function, integrated with Power BI, allows live queries
on SSAS tabular models. Moving data to the cloud or scheduling data updates in advance
is not needed - reports and data can be viewed in real time through dashboards, after which
the data management process can continue using various other methods/models specific to the
organization [13,14].
The Analysis Services Connector is a client agent that enables Power BI to connect to local
SQL Server Analysis Services instances.

Figure 6. Example of Live Connection to Power BI

When a user browses a Power BI report based on SSAS data, Power BI issues Data Analysis
Expressions (DAX) queries to the connector, which acts as a proxy between Power BI and
SSAS. The connector passes the identity of the signed-in user, authorized through the Azure
Active Directory service, and applies the existing role-based SSAS security permissions.
The connector then queries the local SSAS cube to return the data, and connection caching
optimizes query performance [15].
Communication between the connector and Power BI is achieved through the Azure
Service Bus, which creates a secure SSL channel between the Power BI service and the
local data through an outbound port. This process does not require opening an inbound
port in the local firewall.
Before users can access data from an SSAS database, the Analysis Services Connector
must be installed in that location. The connector can be installed on any server that has
access to the web and to the relevant Analysis Services instance.
When a company’s Active Directory is federated with Azure, the authentication process
works automatically. If there is no federation with Azure, authentication can be enabled
through additional configuration.

With Microsoft Power BI Desktop or Microsoft Excel, business analysts can import data
from a wide range of local data sources and then publish them in Power BI.
Microsoft Power BI Personal Gateway enables data management and synchronization so
that reports and dashboards in Power BI are always up to date.
Power BI integrates with other cloud services, including Azure SQL Database, Azure
SQL Database Auditing, and Azure Stream Analytics. By extending Azure-specific
capabilities into Power BI, integrated BI solutions can be created seamlessly. For
example, Azure Stream Analytics can be used to process streaming data, which can then
be output to Power BI, allowing the dashboard to be updated in real time.
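As a minimal sketch of this real-time path (not the Stream Analytics output adapter itself), rows can also be pushed into a Power BI streaming dataset through the Power BI REST API; the dataset ID, table name and access token below are placeholders, and the call assumes a push-enabled dataset already exists in the service:

import requests

# Placeholders - a real call needs an Azure AD access token and the ID of an
# existing push/streaming dataset in the Power BI service.
ACCESS_TOKEN = "<azure-ad-access-token>"
DATASET_ID = "<dataset-id>"
TABLE_NAME = "Vacancies"

url = (f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}"
       f"/tables/{TABLE_NAME}/rows")
payload = {"rows": [{"county": "Constanta", "positions": 120}]}

# Push one batch of rows; dashboard tiles bound to this dataset update in near real time.
response = requests.post(url, json=payload,
                         headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
response.raise_for_status()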
Excel and Power BI Desktop files can be published directly into Power BI, simplifying
the generation of dashboards and real-time reports. When uploading a file, Power BI can
automatically improve the data by detecting key features. For example, if a table in an
uploaded Excel file includes a date field, Power BI can automatically create month and
year columns to facilitate reporting based on these elements.
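As an illustration of this kind of derived-column enrichment (a sketch of the equivalent transformation in pandas, not Power BI's internal mechanism, with hypothetical column names):

import pandas as pd

# Hypothetical sample data; in Power BI this enrichment happens automatically on upload.
jobs = pd.DataFrame({
    "position": ["inspector", "technician", "advisor"],
    "posting_date": pd.to_datetime(["2017-10-02", "2017-11-06", "2017-11-11"]),
})

# Derive month and year columns from the date field, similar to Power BI's automatic
# date hierarchy, so reports can be grouped by these elements.
jobs["year"] = jobs["posting_date"].dt.year
jobs["month"] = jobs["posting_date"].dt.month_name()

print(jobs)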
Uploading Excel files can be done from a computer or by connecting them to OneDrive
for Business or OneDrive Personal. The advantage of connecting to OneDrive
workbooks is that any changes to a workbook will automatically appear in the dashboards
and reports connected to it in Power BI.
Power BI supports files with advanced data models, such as Power BI Desktop files and
Excel files with Power Pivot data models. When an Excel workbook with a Power Pivot
data model is uploaded, Power BI loads the entire data model, allowing more complex
applications. The same is true for Power BI Desktop files [16].
Uploading files from Power BI Desktop allows combining data from a variety of sources
that do not connect directly to Power BI. For example, when using Power BI to explore
data from Facebook, a SharePoint list or an organization’s own SAP system, the data can
be accessed through Power BI Desktop, and a report can then be generated and published
in Power BI. Similarly, Power BI Desktop allows connecting to data from multiple
sources.

5. USING POWER BI TO ANALYZE THE IMPLEMENTATION OF E-GOVERNANCE AT REGIONAL LEVEL. CASE STUDY

In the EU, according to Eurostat statistics, filling in and submitting income tax returns,
looking for job vacancies and online visits to public libraries are the services with the
highest percentage of users. However, in general, citizens’ interest in online income tax
forms and job search is lower than interest in other categories of e-services [17,18].
In this case study, conducted on November 11, 2017, we analyzed the job vacancies
offered through www.posturi.gov.ro. They belong to the category of jobs in public
institutions.
The positions occupied in public institutions and authorities are classified as follows:
I. Central public administration, out of which:
1. Institutions fully financed by the state budget;
2. Institutions fully funded by social security budgets;

3. Institutions subsidized by the state budget and the unemployment insurance budget;
4. Institutions fully financed by their own income.
II. Local public administration, out of which:
1. Institutions fully funded by local budgets;
2. Institutions fully or partially funded by their own income.
Using Microsoft Power BI, we imported a .csv file containing the job vacancy status
in each county, as shown in Figure 7.
To make various charts on the vacancy status, we used the specific tools provided by Power
BI, as shown in Figure 8. Such diagrams can help us in the development
and implementation of HR strategies.
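For readers who want to reproduce the aggregation outside Power BI, a minimal pandas sketch of the same import and analysis might look as follows; the file name and the county, employer_type and positions columns are assumptions for illustration, not the actual structure of the www.posturi.gov.ro export:

import pandas as pd

# Assumed file and column names for illustration; the real export from
# www.posturi.gov.ro may be structured differently.
vacancies = pd.read_csv("vacancies_2017-11-11.csv")

# Total vacancies per county, analogous to the county chart (Figure 11).
by_county = (vacancies.groupby("county")["positions"]
             .sum()
             .sort_values(ascending=False))

# Share of vacancies per employer type, analogous to Figure 9.
by_employer = vacancies.groupby("employer_type")["positions"].sum()
employer_share = (by_employer / by_employer.sum() * 100).round(1)

print(by_county.head(5))
print(employer_share)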

Figure 7. Importing data into Power BI

For the purpose of smart and sustainable development, the subject of job openings needs to be
correlated with elements such as human resource planning, the number of people able to work,
human resource crises, flexibility in employment, demographic statistics (the age
pyramid), the role of episodic memory (the link between memories and future plans), etc.

Figure 8. The main graphical representations in Power BI

The diagram in Figure 9 shows the variation of job openings in public institutions,
depending on the type of employer. As can be seen, the highest number of new positions
occurred in local institutions (representing about 60% of the total), followed by those in
city halls (approximately 30%). Employment according to the required level of
qualification (Figure 10) shows a much higher number of technician positions (1046)
compared to management positions (147).

Figure 9. Variation in job positions by employer

Figure 10. Employment by qualification

When considering the variation in the number of job openings in the public system by county
(Figure 11), the leaders are Bucharest, Timis and Constanta, while employment in
the private sector (dominated by the auto and food industries) is led by Ilfov, Bistrita-
Nasaud and Timis counties.

Figure 11. Variation in job positions by county

The regional map of vacancies on 11.11.2017, according to www.posturi.gov.ro, is
presented in Figure 12. Thus, we can see that the counties with the most vacancies (in the
public system) are: Constanta, Iasi, Brasov, Cluj and Alba.
In each development region there are statistics regarding the correlations between the
number of jobs and income [19]. For example, in the Center Region, in 2016, Sibiu
County ranked first in terms of the highest net salaries - 1.997 RON, followed by
Brasov - 1.827 RON, Mures - 1.708 RON, Alba - 1.689 RON, Harghita - 1.373 RON and
Covasna - 1.420 RON. At the national level, Bucharest, Ilfov, Cluj, Timis and Sibiu are
the counties where the employees earned the highest net salaries in 2016, between 2.138
and 2.857 RON net per month.

Figure 12. Regional map of vacancies on 11.11.2017 (Source: www.posturi.gov.ro)

6. CONCLUSIONS

E-Governance tasks and decisions are generated, transmitted and implemented, first and
foremost, depending on the efficiency (models, procedures, schemes, etc.) with which
governments use their own leadership to develop national cyber security strategies. The
main approaches to this end must answer the following questions: Given the number and
complexity of the variables, how involved should a government be in defining and covering
cyber security at national level? Given the factors involved (the rules of the game), how
should governments balance their investment in cyber security and law enforcement both
in the state and the private sector? What would be the most effective methods of involving
military specialists in supporting the private sector in the event of cyber attacks (when
required)? What would be the most effective ways of facilitating international civil
society cooperation in the field of cyber security? How can diplomatic initiatives reduce
the need to use the army to ensure internal cyber security? What are the methods that can
be used by a government to avoid international disputes over cyber issues that would
undermine IT security cooperation? The political dimension of a conflict (military or
non-military) is an unremitting challenge for all the institutions involved. Effective
implementation of e-Governance contributes to increasing the level of cooperation and the
number of interactions between policy decision-makers, thus increasing the chances of
diplomatic settlement of conflicts. At local level, the increase in public sector
employment has enhanced the level of taxpayers’ satisfaction in their interaction with
civil servants. Currently we are witnessing a paradox of e-Governance: in rural localities,
where the number of Internet users is low, customer satisfaction increases as
the number of over-the-counter interactions increases between the taxpayer and the civil
servant; in the case of urban localities, customer satisfaction increases with the emergence
of new e-services, which implicitly leads to a decrease in the number of interactions at the
counter between the two parties involved. The increasing or decreasing number of
employees in the public sector may generate fluctuations depending on: the variation in
the number of institutions (management of change may lead to the establishment or
termination of some public bodies); the degree of implementation of e-Governance; salary
increases or decreases generated by changes in the five factors that govern the economy:
ownership, information, the law of demand, substitutes and inflation. Digital transformation
involves the interaction of four dimensions: service, security, transparency and trust. The
implementation of the seven dimensions of Big Data (variety, volume, velocity, value,
variability, visualization and veracity) in the context of the seven dimensions of
sustainable development (human being, culture, political life, economy, nature, society
and spirit) is the determining factor of digital transformation through e-Governance.

REFERENCES

[1] Barbara, A.A. Luc, J. Gilles, P. Jeffrey, R. E-Governance & Government Online in
Canada: Partnerships, People & Prospects, Centre on Governance, University of
Ottawa, Canada, 2001.
[2] Khanh, N.T.V. The critical factors affecting E-Government adoption: A Conceptual
Framework in Vietnam, School of IT Business, SOOGSIL University, Seoul, South
Korea, 2014.
[3] *** E-Government and E-Democracy in Switzerland and Canada. Using online
tools to improve civic participation. Summary report of a roundtable discussion,
Ottawa, Ontario, April 8, 2011.
[4] Finger, M. Pécoud, G. From e-Government to e-Governance? Towards a model of
e-Governance, Swiss Federal Institute of Technology, Lausanne, Switzerland, 2010.
[5] Vasiu, I. Vasiu, L. Criminalitatea în cyberspaţiu, Editura Universul Juridic,
Bucureşti 2011.
[6] Kelvin, J.B. Stephen, M. Digital Solutions for Contemporary Democracy and
Government, IGI Global, USA, 2015.
[7] Fang, Z. E-Government in Digital Era: Concept, Practice, and Development, School
of Public Administration, National Institute of Development Administration,
Thailand, 2002.
[8] Wirtz, B.W. Daiser, P. E-Government. Strategy Process Instruments, German
University of Administrative Sciences Speyer, ISBN 978-3-00-050445-7, 1st
edition, September 2015.
[9] Didraga, O. Managementul riscurilor în proiectele de e-guvernare din România,
Colecţia Cercetare avansată postdoctorală în ştiinţe economice, Editura ASE
Bucureşti, 2015.
[10] Ailioaie, S. Hera, O. Kertesz, S. Ghidul de e-Democraţie şi Guvernare Electronică,
Ghid realizat pentru Parlamentul României, Octombrie 2001.
[11] Tăbuşcă, A. Established Ways to Attack Even the Best Encryption Algorithm,
Journal of Information Systems & Operations Management, Vol.5, No.2.1/2011,
Ed. Universitară, ISSN 1843-4711, 2011.
[12] “Electrical, Communications and Technology Systems for critical infrastructure
projects” http://www.shrader.net (consulted in November 2017).

[13] Lachev, T. Applied Microsoft Power BI (2nd Edition): Bring your data to life!
Microsoft Data Analytics, 2017.
[14] https://powerbi.microsoft.com/en-us/documentation/powerbi-desktop-getting-started.
[15] https://community.powerbi.com/t5/Data-Insights-Summit-2017-On/Take-Power-BI-Visualization-to-the-Next-Level/m-p/197936.
[16] Căruţaşu, G. Pirnau, M. Facilities and changes in the educational process when
using Office365, Journal of Information Systems & Operations Management, Vol.
11 Issue 1, pp. 29-41, May 2017.
[17] http://ec.europa.eu/eurostat/statistics-explained/index.php/Archive:E-government_statistics#Use_of_e-government_services_by_employment_situation.
[18] http://statistici.insse.ro/shop/index.jsp?page=tempo3&lang=ro&ind=FOM104B.
[19] Botezatu Mihai Alexandru, “Modele de analiză în studiul forţei de muncă din
România“, Editura Pro Universitaria, ISBN 978-606-26-0308-3, Bucureşti, 2015


AGE DIFFERENCES IN RESPONSES TO MARKETING COMMUNICATION TECHNIQUES USED IN ONLINE SOCIAL NETWORKS

Alexandra Perju-Mitran 1*
Andreea-Elisabeta Budacia 2

ABSTRACT

In order to demonstrate that there are age differences in the way online consumers react
to online marketing communication techniques, the study builds on a previously tested
and validated empirical model showing the influence of online marketing communication
via social networks on behavioral intentions, by continuing in a structural equation
modeling approach. Significant differences between users of different age categories are
found and implications for online communication practitioners are discussed, with
strategic proposals stemming from these results. The study addresses the manner in which
potential consumers of different ages react to and examine online social media marketing
communication efforts, and how their perceptions influence various intentions. By
drawing from theories of consumer behavior, a previously confirmed model for online
user behavior in response to online marketing messages is tested for each age group. The
results demonstrate that the direct and positive links between user perceptions of online
marketing communication and attitudes, and between users’ attitudes towards online
communication and their intentions, vary in strength across age groups.
Conclusions also feature strategic communication proposals, based on the findings.

KEYWORDS: consumer behavior, online communication, structural equation modeling, online social networks, promotional techniques, generational cohorts

INTRODUCTION

New marketing communication efforts must focus on dialogue, given the interactive
character of the social media environment.[1] Online media tools now allow practitioners
to maintain an open dialogue with consumers and influence their intentions.
Age as a variable is strongly correlated with the level of social media platform usage.
Today, 90% of young adults use social media, a 78-percentage-point increase since 2005.
There has also been a 69-point increase among users aged 30-49, to 77% today. While
usage among young adults leveled off as early as 2010, since then there has been a surge
in use among seniors, to over 35%.[2] Studies focused on how different people use the

1* corresponding author, Assistant Professor PhD, Romanian-American University, Faculty of European Economic Studies, Bucharest, alexandraperju@gmail.com
2 Associate Professor PhD, Romanian-American University, Faculty of Management-Marketing, Bucharest, Romania, budacia.andreea@profesor.rau.ro

Internet have found systematic differences across user types in their online pursuits
[3][4][5]; thus it is worth considering whether social networking site users may also react
differently towards various marketing communication stimuli, otherwise researchers and
practitioners risk unintentionally excluding entire age groups when suggesting online
communication strategies.
By adding to our own work on a previously tested and validated model [1] and
examining the effects of age as a variable on a relevant sample of 1097 Romanian social
networking site users, we obtain results that may aid researchers in explaining the
inconsistencies of prior online social media communication research, thus providing a
better understanding of age-based preferences and likely responses to marketing
communication messages via social networking sites.

1. CONCEPTUAL MODEL AND OBJECTIVES

In order to show the influences on user intent, variables and links were formulated in
accordance with the Theory of Planned Behavior (TPB), used to examine relationships
between variables and the individual's intention to exhibit a certain behavior [6], as
participation in online communication and the intentions of social networking site
members to assimilate marketing information conveyed by companies, to share it or to
become loyal to a brand or company are all volitional behaviors. TPB has also been
widely used in the exploration of variables influencing the behavior of Internet users [7]. All
the scales used exhibited high internal consistency in our previous research, and the large
sample size allowed for a generational cohort comparison of Romanian social networking
site users.
In the online environment, multiple studies have established the mediating role of attitude
in the relationship between stimuli and online purchase intention and word of mouth
generation (e.g. [8-10]).
The structural model was based on the Partial Least Squares (PLS) regression algorithm
and includes the standardized β coefficients and R-squared values for each endogenous
variable, used to quantify how much of the variation of a variable can be explained by
variation in other variables. Model results are shown in Figure 1.
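As a simplified illustration of what a standardized path coefficient (β) and R² mean, the sketch below estimates them for a single structural equation on hypothetical data; the authors' full model was estimated with dedicated PLS-SEM software, so this is only a conceptual aid:

import numpy as np

def standardized_path(X, y):
    """Standardized path coefficients (betas) and R^2 for a single structural
    equation y ~ X, estimated on z-scored variables (no intercept needed)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    r2 = 1.0 - np.sum((yz - Xz @ beta) ** 2) / np.sum(yz ** 2)
    return beta, r2

# Hypothetical data: four predictors (e.g. Trust, Useful, Inf, Relev) and one
# endogenous variable (e.g. Atit).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 0.5 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2] + 0.4 * X[:, 3] + rng.normal(size=200)

betas, r2 = standardized_path(X, y)
print(betas.round(2), round(r2, 2))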

Figure 1. Structural Equation Model Results (Source: Perju-Mitran et al., 2014, p. 251)

The present study has two main objectives:


O1: To prove that previously hypothesized variable connections are valid regardless of
age category;
O2: To show the effect sizes in model relations in accordance with age categories.
In order to accomplish objective no. 2, a restricted PLS (structural equation modeling by
means of the partial least squares method) model analysis was performed separately for
each studied age group.
Regarding the distribution of social media users by age, the respondents were grouped as
can be seen in Fig. 2.

Figure 2. Respondents by age groups

2. METHODOLOGY

Basic demographic information was measured using standard modes of operationalization.
Respondents were asked to pick their age category, restricted to: under 20 years old, 20 to
35, 36 to 50, and over 50.
In digital marketing research and in consumer behavior modeling in particular, the study
of control variables is imperative, indicating the characteristics of respondents (such as
demographic variables). Since a structural model has been previously validated in this
context, the effects of “Age” as a control variable on the proposed model will be studied.
To find out whether there are differences in the intensity of causal links according to the
respondent's age, we started from the premise that all causal relationships defined in the
model remain significant regardless of age and redefined our model by introducing the
“Age” variable, and creating causal relationships between “Age” and each latent variable
of the model.

3. RESULTS

As per our first objective, the results of introducing the “Age” variable into the model are
presented in Fig. 3 and Table 1.

Figure 3. Model with “Age” Variable

Table 1. Path coefficients and p values for the "Age" control variable

Inf → Useful: β = 0.804, p < 0.001
Trust → Atit: β = 0.494, p < 0.001
Useful → Atit: β = 0.423, p < 0.001
Inf → Atit: β = 0.339, p < 0.001
Relev → Atit: β = 0.424, p < 0.001
Atit → Intinf: β = 0.668, p < 0.001
Atit → Idistr: β = 0.668, p < 0.001
Atit → Iloyal: β = 0.665, p < 0.001

Based on the new β and p values for the control variable, the hypotheses underlying the
previous model [1] are re-checked in Table 2.

Table 2. Testing hypotheses for the "Age" control variable

H1 (β = 0.804, p < 0.001, valid): There is a direct and positive relationship between the “Informative character” (Inf) of the promotional messages sent by companies through the online social platform and the “Perceived usefulness” (Useful) of the promotional messages sent, regardless of the age of the user.
H2 (β = 0.494, p < 0.001, valid): There is a direct and positive relationship between the user’s “Trust” in the messages sent by companies via the online social platform and the “Attitude” (Atit) towards the messages sent by companies through the online social platform, regardless of the user’s age.
H3 (β = 0.423, p < 0.001, valid): There is a direct and positive relationship between the “Perceived usefulness” of promotional messages sent by companies through the online social platform and the “Attitude” towards the messages sent by companies through the online social platform, regardless of the age of the user.
H4 (β = 0.339, p < 0.001, valid): There is a direct and positive relationship between the “Informative character” of the promotional messages sent by companies through the online social platform and the “Attitude” towards the messages sent by companies through the online social platform, regardless of the age of the user.
H5 (β = 0.424, p < 0.001, valid): There is a direct and positive relationship between the “Relevance” of the promotional messages sent by companies through the online social platform and the “Attitude” towards the messages sent by companies through the online social platform, regardless of the age of the user.
H6 (β = 0.668, p < 0.001, valid): There is a direct and positive relationship between the “Attitude” towards the messages sent by companies through the online social networking platform and the “Intention to use” (Intinf) the information provided by companies through online social platforms, regardless of the user’s age.
H7 (β = 0.668, p < 0.001, valid): There is a direct and positive relationship between the “Attitude” towards the messages sent by companies through the online social platform and the “Intention to distribute” (Idistr) the information within the social platform, regardless of age.
H8 (β = 0.665, p < 0.001, valid): There is a direct and positive relationship between the “Attitude” towards messages sent by companies through the online social platform and the “Intention to become loyal” (Iloyal) to the company, regardless of the user’s age.

From testing the hypotheses for the age control variable, it can be noticed that the causal
relationships in the analysis are maintained irrespective of the user’s age. We can say with
certainty that a positive attitude influences the intentions (to acquire supplementary
information, to distribute the information, to become loyal to the company or brand)
directly and positively, regardless of age, thus completing our first objective.
In order to test relationships in control groups, each relationship will be analyzed for 3
age groups (under 20 years together with 20-35 years, 36-50 years, over 50 years), as the
"under 20" group is not representative by itself. We take into consideration the
significance threshold p and the magnitude of the effect, represented by Cohen's f-squared
coefficient (0.02-0.14 small, 0.15-0.34 medium, 0.35 and above high) [11].
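For reference, a minimal sketch of how an effect size of this kind can be computed from the R² values of a model estimated with and without a given predictor; the R² values below are hypothetical and not taken from the study:

def cohen_f_squared(r2_full: float, r2_reduced: float) -> float:
    """Cohen's f^2 for the contribution of one predictor:
    f^2 = (R^2_full - R^2_reduced) / (1 - R^2_full)."""
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# Hypothetical example: R^2 of Atit with and without the Relev predictor.
f2 = cohen_f_squared(r2_full=0.62, r2_reduced=0.45)
label = "small" if f2 < 0.15 else "medium" if f2 < 0.35 else "high"
print(round(f2, 2), label)  # about 0.45 -> high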
Table 3. Path coefficients, p values and effect sizes for the “under 20-35” group

Inf → Useful: β = 0.843, p < 0.001, f² = 0.711
Trust → Atit: β = 0.246, p < 0.001, f² = 0.155
Useful → Atit: β = 0.349, p < 0.001, f² = 0.242
Inf → Atit: β = 0.062, p = 0.249, f² = 0.043
Relev → Atit: β = 0.492, p < 0.001, f² = 0.370
Atit → Intinf: β = 0.691, p < 0.001, f² = 0.478
Atit → Idistr: β = 0.662, p < 0.001, f² = 0.438
Atit → Iloyal: β = 0.709, p < 0.001, f² = 0.503
From the start, we notice the invalidation of the Information-Attitude link for this group,
as the significance threshold p < 0.05 is not met, which means that, regardless of the
difference in intensity signaled, the effect has no statistical validity.
Differences in effect sizes will be discussed in detail after presenting the p values and
effect sizes for each age group, where there is a significant difference between groups.

Table 4. Path coefficients, p values and effect sizes for the “36-50” group

Inf → Useful: β = 0.588, p = 0.002, f² = 0.345
Trust → Atit: β = 0.433, p < 0.001, f² = 0.329
Useful → Atit: β = 0.164, p = 0.002, f² = 0.104
Inf → Atit: β = 0.096, p = 0.021, f² = 0.043
Relev → Atit: β = 0.470, p < 0.001, f² = 0.363
Atit → Intinf: β = 0.678, p < 0.001, f² = 0.460
Atit → Idistr: β = 0.716, p < 0.001, f² = 0.413
Atit → Iloyal: β = 0.632, p < 0.001, f² = 0.399

Table 5. Path coefficients, p values and effect sizes for the “over 50” group

Inf → Useful: β = 0.813, p < 0.001, f² = 0.833
Trust → Atit: β = 0.363, p = 0.012, f² = 0.361
Useful → Atit: β = 0.673, p < 0.001, f² = 0.671
Inf → Atit: β = -0.028, p = 0.401, f² = 0.028
Relev → Atit: β = -0.013, p = 0.038, f² = 0.009
Atit → Intinf: β = 0.689, p < 0.001, f² = 0.475
Atit → Idistr: β = 0.564, p < 0.001, f² = 0.318
Atit → Iloyal: β = 0.797, p < 0.001, f² = 0.635

Based on the new p values and effect sizes in the case of the age variable, in the tables of
the path coefficients and the p values, we note the following:
1. The direct and positive relationship between the informative character of the
promotional messages sent by companies through the online social platform and the
perceived usefulness of the promotional messages sent is more intense for the "under
20-35" group and the "over 50" group (high effect), with a significant difference in
intensity between the three groups at a significance threshold p < 0.05.
2. The direct and positive relationship between the user's trust in the messages sent
by companies through the online social platform and the attitude towards the
messages sent by companies through the online social platform is higher in the case
of the "over 50" group, with a significant difference intensity between the three
groups.
3. The direct and positive relationship between the perceived usefulness of
promotional messages sent by companies through the online social platform and the
attitude towards the messages sent by companies through the online social platform
is higher in the case of the "over 50" group, with a significant difference in intensity
between groups.
4. The direct relationship between the informative character of the promotional
messages sent by companies via the online social platform and the attitude cannot be
validated in case of differences between groups, the significance threshold exceeding
the value of 0.05.
5. The direct and positive relationship between the relevance of promotional
messages sent by companies through the online social platform and the attitude
towards the messages is stronger in the case of the "under 20-35" (high effect) and
"36- 50 "(high effect), with a significant difference in intensity from the" over 50
"(low effect) group.
6. In the direct and positive relationship between the attitude towards the messages
sent by companies through the online social platform and the intention to use the
information (further inform oneself), the difference between the effects is
insignificant.
7. The direct and positive relationship between the attitude towards the messages
sent by companies through the online social platform and the intention to distribute
the information within the social platform is more intense in the case of the "under
20-35" and "36-50" groups (high effect), with a significant difference in intensity
between groups.
8. In the direct and positive relationship between the attitude towards the messages
sent by companies via the online social platform and the intention to become loyal to
the company or brand, the difference between the groups is insignificant.

4. CONCLUSIONS AND IMPLICATIONS

With reference to the study objectives and our findings supported by the SEM analysis
results, we are able to formulate a series of strategic proposals, in accordance with the
social media consumers’ age group.
First, in the case of users up to 35 years old (young consumers), we suggest marketing
communication approaches highlighting the messages’ informative character and
relevance, with interventions undertaken at the cognitive and symbolic levels,
with the objective of an affective and/or symbolic positioning. This group will respond
favorably to strategic approaches that stimulate communication and participation, that
engage them in conversation, making it easier to achieve word-of-mouth when the attitude
towards communication effort is favorable.
Second, in the case of users 36 to 50 years old, we recommend trust-building strategies
and highlighting relevance. This group is prone to sharing the information received as
long as the source is trustworthy and relevant to their interests; strategies that underline
the usefulness factor will stimulate communication and participation. Interventions at the
affective and conative level will generate more favorable responses, as long as emphasis
is placed on affective and symbolic positioning.
Third, in the case of the over 50 group, we suggest trust-building communicational
strategies, and offering options that highlight the usefulness and informative character, in
which cognitive interventions will preferably be undertaken through objective
positioning.
The present study brings insights for corporate communication, showing that potential
consumers under the age of 35 are prone to advocate on behalf of new brands or
companies, driven by the company’s perceived relevance in the online social
media space. Explaining connections between perceptions of promotional messages,
brand perception, and WOM propensity, the current study adds contributions to the
previous findings on consumer stereotypes [12], applied to consumer-company
interactions, by differentiating age groups and highlighting generational cohort behavior.
Future research is encouraged to investigate the “under 20” and “20-35” groups
separately, as in the present case we lacked representativeness for users under the age of
20. Future research may also explore hidden mediating variables that may hinder the
formation of intentions for different age groups.

REFERENCES

[1] A. Perju-Mitran, C. I. Negricea, T. Edu. Modelling The Influence Of Online Marketing Communication On Behavioural Intentions, Network Intelligence Studies, Volume 2, 2014, pp. 245-253.
[2] A. Perrin. Social Networking Usage: 2005-2015. Pew Research Center. October
2015. Available at: http://www.pewinternet.org/2015/10/08/2015/Social-
Networking-Usage-2005-2015/

[3] E. Hargittai. Whose space? Differences among users and non‐users of social
network sites, Journal of Computer‐Mediated Communication, Volume 13, Issue 1,
2007, pp.276-297.
[4] E. Hargittai. A framework for studying differences in people’s digital media uses. In
Grenzenlose Cyberwelt?, 2007, pp. 121–137, Ed. S. Iske, A. Klein, N. Kutscher,
H.-U Otto, Springer, Berlin, Germany
[5] S. Livingstone, E. Helsper. Gradations in digital inclusion: Children, young people,
and the digital divide. New Media and Society, Volume 9, Issue 4, 2007, pp. 671–
696.
[6] I. Ajzen. The theory of planned behaviour, Organizational Behaviour and Human
Decision Processes, Volume 50, Issue 2, 1991, pp.179– 211.
[7] N. Park, A. Yang. Online environmental community members’ intention to
participate in environmental activities: An application of the theory of planned
behaviour in the Chinese context, Computers in Human Behaviour, Volume 28,
Issue 4, 2012, pp. 1298–1306.
[8] F. Rasty, C. Chou, D. Feiz. The impact of internet travel advertising design,
tourists' attitude, and internet travel advertising effect on tourists' purchase
intention: the moderating role of involvement, Journal of Travel & Tourism
Marketing, Volume 30, Issue 5, 2013, pp. 482-496.
[9] J. S. Stevenson, G. C. Bruner, A. Kumar. Webpage background and viewer
attitudes, Journal of Advertising Research, Volume 40, Issue 1-2, 2000, pp. 29-34.
[10] S. I. Wu, P. L. Wei, J. H. Chen. Influential factors and relational structure of
Internet banner advertising in the tourism industry, Tourism Management, Volume
29, Issue 2, 2008. pp. 221-236.
[11] J. Cohen. Statistical power analysis for the behavioral sciences, Lawrence Erlbaum, Hillsdale, NJ, 1988. ISBN: 978-0-12-179060-8.
[12] A.G. Andrei, A. Zait, E.M. Vatamanescu, F. Pinzaru. Word-of-mouth generation and
brand communication strategy: Findings from an experimental study explored with
PLS-SEM, Industrial Management & Data Systems, Volume 117, Issue 3, 2017, pp.
478-495.

JOURNAL
OF
INFORMATION SYSTEMS &
OPERATIONS MANAGEMENT

ISSN: 1843-4711
---
Romanian-American University
No. 1B, Expozitiei Avenue
Bucharest, Sector 1, ROMANIA
JISOM.RAU.RO
office@jisom.rau.ro
